What is the Right Resolution for Life-Sized Prints?

One of our readers asked,

I came from a background of small print projects. Now I’m tasked to find photography art for five doors, each 12 x 14 inches. When it comes to finding images, I don’t know where to start. What would be the best size? Best resolution?

Printing press worker

For a start, there are plenty of royalty-free images you can find on Pexels, Pixabay, or Unsplash. Most photographs from these sources have good lighting as well, so you just need to pick the ones that have enough pixels for your given print project.

Since our dear reader was looking for images to place on doors, I’m going to assume that people would view these images no closer than arm’s length, which is about two feet (24 inches).

Why does the distance matter? Image resolution is not just about how many dots fit in a given inch (the dots-per-inch or pixels-per-inch metric), but also about how far away people view the image. The farther the viewing distance, the lower the pixel density required before viewers stop perceiving the image as blurry or blocky, or being able to pick out its individual pixels (dots).

What you need is a function that estimates the lower bound pixel density given an expected viewing distance. In other words, a formula that converts a distance in inches into a minimum dots-per-inch density value.

q = f(x)



  • x  — the expected distance between the viewer and the image, in inches.
  • q  — the minimum image resolution, in dots-per-inch or pixels-per-inch.

To derive the formula, first let’s draw a schematic of the problem. The image below shows a human eye (representing the viewer) looking at an image of height y at distance x. The angle that the eye makes between the bottom of the image and its top is ⍺. Let’s keep things simple and two-dimensional for now, because the calculation for the width of the image works exactly the same way, only on a different axis.

Eyesight model

We know that magazines are usually printed at 300 dpi (dots per inch). This resolution is effective when a person with normal eyesight views it at a standard reading distance, which is one foot (12 inches). This is probably also why the first retina-display iPhone debuted at approximately that pixel density, 326 ppi (pixels per inch). Similarly, a retina-display MacBook Pro has only 220 ppi but looks just as good because of the longer viewing distance. Of course, some people have better eyesight and can appreciate higher-resolution images, while others can’t tell the difference between the iPhone 4 and iPhone 3G (pre-retina) screens. But let’s take the middle ground for simplicity’s sake.

Now let’s plug those standard numbers into our model. Take an image 1 inch high viewed at a distance of 12 inches. The image contains 300 pixels along that inch, which is the minimum required to “look good”.

Standard image model

Then we need to find the angle ⍺ that the eye makes between looking at the bottom and the top edge of the image. This will be useful for calculating the pixels-per-degree value later on. Using high-school trigonometry, we get…

tan(⍺)  = y/x
tan(⍺)  = 1 inch / 12 inch

⍺ = arctan(1/12)
⍺ ≅ 4.76°
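The same angle can be checked numerically. A quick sketch in Python (note that `math.atan` returns radians, so we convert to degrees):

```python
import math

# Angle subtended by a 1-inch-tall image viewed from 12 inches away.
y = 1.0   # image height in inches
x = 12.0  # viewing distance in inches

alpha = math.degrees(math.atan(y / x))
print(round(alpha, 2))  # 4.76 degrees
```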


We’ll define p as the pixel-per-degree ratio as follows:

p = q / ⍺ 



  • p — the pixel-per-degree ratio
  • q — the number of pixels (or printed dots).
  • ⍺ — the angle which contains those pixels.

Our 1-inch image has the bare minimum of q = 300 pixels (dots), making it a 300 dpi image. Let’s plug those numbers in.

p = q / ⍺ 
p = 300 pixels / 4.76°
p ≅ 62.98 pixels/degree 


That means that for every degree of the eye’s field of view there need to be at least p ≅ 62.98 pixels, so that the eye can’t distinguish the individual pixels.

Now, for an arbitrary distance x, we want to calculate the minimum number of pixels q at which our 1-inch image still looks acceptable. We solve the two equations above for q.

p = q / ⍺ 
q = p * ⍺

⍺ = arctan(y/x)
q = p * arctan(y/x)


We already know the following values, derived from our standard model of a 1-inch image at a resolution of 300 pixels per inch viewed at a 12-inch distance.

y = 1 inch
p = 62.98 pixels / degree


Plug those numbers in and you get the formula to calculate the minimum resolution given an expected viewing distance. This is the formula that we’ve been looking for, the function to convert a viewing distance to a minimum pixel density value.

q = 62.98 * arctan(1 / x)



  • q — the minimum resolution in pixels or dots per inch.
  • x — the expected distance between the image and the viewer.
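The formula translates into a small Python function (a minimal sketch; the function name is mine, and the viewing distance x is assumed to be in inches):

```python
import math

# Derived from the standard model: 300 dpi viewed at 12 inches.
PIXELS_PER_DEGREE = 62.98

def min_resolution(x):
    """Minimum resolution in pixels (dots) per inch that still looks
    sharp at a viewing distance of x inches."""
    return PIXELS_PER_DEGREE * math.degrees(math.atan(1.0 / x))

print(round(min_resolution(12)))  # 300 dpi, the magazine standard
print(round(min_resolution(24)))  # 150 dpi at arm's length
```

Note that plugging the standard 12-inch reading distance back in recovers the 300 dpi magazine figure, which is a good sanity check on the derivation.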

Then let’s go back to our dear reader’s initial problem: finding images to fill 12 × 14 inch surfaces on doors. We’re assuming people won’t observe these doors from closer than arm’s length (or they’d risk getting hit if someone opened the door from the other side), so the viewing distance is about 24 inches (two feet).

q = 62.98 * arctan(1/24) ≅ 150 pixels per inch.


Given these we can calculate the minimum pixel dimensions of the images for the doors:

width  = 12 inches * 150 pixels per inch = 1800 pixels
height = 14 inches * 150 pixels per inch = 2100 pixels
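The same arithmetic as a short sketch, with the 150 dpi figure taken from the distance formula above:

```python
# Minimum pixel dimensions for a 12 x 14 inch door print
# viewed from about 24 inches away (roughly 150 dpi).
dpi = 150
width_in, height_in = 12, 14

width_px = width_in * dpi    # 1800 pixels
height_px = height_in * dpi  # 2100 pixels
print(width_px, height_px)
```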


Fortunately, most images on Pexels, Pixabay, or Unsplash have more pixels than the minimum dimensions above, and thus will look good printed on doors.


Choosing between SWIFT and Remittance

When you are a migrant worker, you probably need to move some money back to your home country. There are relatively recent offerings of cheaper money-transfer services, typically advertised as remittance, from both banks and specialist companies. Some even promise freedom from transaction charges. You might then wonder: with so many options, how can you choose the best one? What are the differences between these services and regular telegraphic transfers?


Reactive time tracking of a day’s consulting work

Do you need to keep track of time, but find that the act of time tracking itself often gets in the way of your flow? Do you want to discover the emergent patterns of your work day, that is, at what times of the day you are most productive and at what times you can’t seem to get anything done? Perhaps you need to control the amount of time you fool around on Facebook or just stumble around the web?

We’re working on an application that does reactive time tracking — that is, it silently monitors what you are doing and then lets you classify your time into various projects later. The idea came from David Seah’s paper-based emergent time tracker series, and we aim to make a more automated version of it.

We call this app Time Fairy because, just like a fairy, it silently sits in the background and keeps notes while you are working. It sees the applications that you are running and records the time you spend interacting with each application. It also gently reminds you to keep being productive. At the end of the workday, you can pull up a report of that day and allocate your activities to the various projects you are working on.

Time Fairy works best for desk-bound information workers. People like technology freelancers, web designers, writers, and internet journalists should find Time Fairy useful. Conversely, Time Fairy won’t be much help for those who spend more than 50% of their work time walking around.

A prototype daily report of Time Fairy is shown below. As you can see, Time Fairy caters for multitaskers. That is, it allows you to overbook your time and work on multiple projects at the same time.


Let’s say that the report above shows a sample work day of Jane Doe, a web journalist. She has three projects at hand:

  • Project A is a web-video presentation piece.
  • Project B is preparing for a conference presentation.
  • Project C is a number-crunching and data analysis piece.

At the start of the day, she spends some time reading and writing e-mails with her correspondents; the e-mails are primarily for Project A and Project B. At around 10:00 she starts writing intensively in MS Word and also browses the web using Safari to source materials for Project A. She also creates her web presentation for Project A, initially in PowerPoint. Project A and Project B share some materials, so she double-books some of her PowerPoint time across both projects.

Just after noon, she goes out for lunch; this shows up as a large “break” block because her computer is idle and screen-locked.

At about 14:00, back from lunch, she again reads some of her e-mail; this time the messages are primarily for Project B and Project C. Then at around 15:00 she sources materials from the web using Safari and crunches the numbers she got using Excel.

Late in the afternoon, at around 16:00, she realizes that she needs to complete Project A’s web video. She has already converted the PowerPoint presentation into video format and now starts touching it up in iMovie. While the movie is being rendered, she continues working on Project C’s calculations in Excel while sourcing data from Safari. Then at 18:00 she completes the day’s tasks for Project C and Project A and opens Mail to compose and send the results.

Sound interesting? Tell us what you think of it in the comments. Better yet, sign up to be a beta tester for Time Fairy.