CS194 Final Project

Intro to Computer Vision & Computational Photography: Anderson Lam, Amy Huang

Lightfield Camera

Overview: In this project we use the Stanford Light Field Archive to reproduce depth refocusing and aperture adjustment by shifting and averaging the sub-aperture images. We also followed Ren Ng's paper on light field photography.


Depth Refocusing

a = [-5, 1]

Faraway objects do not move as much as closer objects when the camera position changes, and we take advantage of this, together with the known camera placement, to implement depth refocusing. We first parsed each camera's grid index from the image file names; this gave us an (x, y) coordinate, which we measured relative to the center image at index (8, 8). The shift applied to each image is this offset scaled by an alpha: (Δx, Δy) = α · (x − 8, y − 8).
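
Below is a minimal sketch of this shift-and-average refocusing, assuming the sub-aperture images have already been loaded along with their (x, y) grid indices parsed from the filenames; the 17×17 grid centered at (8, 8) matches the Stanford Light Field Archive, and scipy.ndimage.shift stands in for whatever sub-pixel shift routine we'd actually use.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(images, grid_xy, alpha, center=(8, 8)):
    """Average all sub-aperture images after shifting each one by
    alpha times its offset from the center camera index."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (x, y) in zip(images, grid_xy):
        dx = alpha * (x - center[0])
        dy = alpha * (y - center[1])
        # Shift rows by dy and columns by dx; any trailing color axis is left
        # untouched. The sign of the shift may need flipping depending on how
        # the grid indices are parsed from the filenames.
        shift_vec = (dy, dx) + (0,) * (img.ndim - 2)
        acc += nd_shift(img.astype(np.float64), shift_vec, order=1, mode='nearest')
    return acc / len(images)
```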

Here are the averaged amethyst photos with a = -1, 0, and 1, respectively:

a = -1
a = 0
a = 1



The animation on the right shows values of a from -3 to 2. As you can see, increasing alpha brings the front of the object into focus, while decreasing it focuses on the back. When alpha = 0 we do no additional scaled shifting; the result is just a plain average of the dataset's images.
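
As a small usage sketch, a sweep like the one above can be produced by calling the refocus function for a range of alphas and saving the frames as a GIF; `imageio` and the output filename here are our own illustrative choices, not part of the original pipeline.

```python
import numpy as np
import imageio

# Assumes `refocus`, `images`, and `grid_xy` from the sketch above.
frames = []
for alpha in np.arange(-3, 2.5, 0.5):   # alpha from -3 to 2
    frame = refocus(images, grid_xy, alpha)
    frames.append(np.clip(frame, 0, 255).astype(np.uint8))
imageio.mimsave('refocus_sweep.gif', frames, duration=0.2)
```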

Let's also look at the Jellybean and Chess datasets!

Here are alpha values from -5 to 1 on the jellybean dataset. We shifted the alpha range down because the jelly beans sit fairly far back in the images.

Jellybean dataset

a = -3
a = -1
a = 1

Chess dataset

a = -3
a = 0
a = 3

Aperture Adjustment

r = [0, 10]

To implement aperture adjustment, we average images taken from camera positions perpendicular to the optical axis: averaging more of these images simulates a bigger aperture. We use a radius variable to control how large the simulated aperture is. By adding images from different angles, we accumulate the rays arriving at each part of the scene, which mimics opening up the aperture of a real camera.
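
Here is a minimal sketch of that idea, assuming the same (image, grid index) pairs as in the refocusing sketch: only views whose grid index falls within `radius` of the center camera are averaged, so radius 0 keeps just the center image (smallest aperture) and larger radii mimic a wider aperture.

```python
import numpy as np

def adjust_aperture(images, grid_xy, radius, center=(8, 8)):
    """Average all sub-aperture images whose camera index lies within
    `radius` of the center camera."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    count = 0
    for img, (x, y) in zip(images, grid_xy):
        if np.hypot(x - center[0], y - center[1]) <= radius:
            acc += img.astype(np.float64)
            count += 1
    return acc / max(count, 1)
```

A fixed alpha shift (as in the refocusing sketch) could also be applied before averaging so the focal plane stays put while the aperture changes.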

Amethyst Dataset

r = 0
r = 10
r = [0, 10]

Jellybean Dataset

r = 0
r = 10
r = [0, 11]

Chess Dataset

r = 0
r = 10
r = [0, 11]

Summary: Through this project we learned how to refocus an image and adjust its apparent aperture after the photos have already been captured. The idea is very straightforward and makes a lot of sense, although we never would have known how this works otherwise.