Epson Panorama Awards 2014 : Judging/Entrant perspective


Firstly, I would like to thank David Evans for inviting me to judge the amateur awards in this year’s competition. It has been a great experience to see the talent on offer, as well as to gain some experience from the other side of competitions! After reviewing 1600+ images, my average score was 73, with a highest of 88 and a lowest of 20. As I recall, I gave 3 gold scores, approximately 50-60 silver scores and approximately 300 bronze scores across both categories. There was a clustering of scores between 70-74 for images I felt were technically fine but lacked that something extra to gain an award. The judges were able to view all of the entries in thumbnail form to get a rough idea of the overall standard. I also used this to identify images I would be ‘looking out’ for at the top end of the field. Once judging commenced, we gave each image a score and could not go back to review our scored images.

The way I looked at a picture broke down into a few areas, some of which were conscious decisions and some more unconscious, but I will try to articulate them in words.

Before any technical aspect was considered, I took an overall look at the image and thought to myself “If I had entered this image, would I have expected an ‘award’?”

If the answer was yes, then it’s a question of how high! For me, a silver was an image I would be happy to see in the top 50 showcase, based on the quality of previous years’ works. I reserved gold for images I thought could win the category. (I should add that of the images I entered, I was not expecting to win or gain anything more than silver awards, and for one of them, I was not expecting any award.)

If the answer was no, then the score would depend on just how far short it fell of that standard, and for what reasons. Here are some of the recurring shortcomings, bearing in mind that this is only my view as one of five judges.

  • 1. The pano for pano’s sake: Often there were images that encompassed a wide vista yet had no real focal point. An example would be an image of a field with blue skies, no particular foreground structure, no particularly interesting light, and not sparse enough to work as a minimalist composition.
Score 67: Suffers from pano for pano’s sake, mono for mono’s sake, perhaps overdone blurred reflection amongst a stitching error I only saw later as well!
  • 2. The crop for crop’s sake: There were many images that appeared to be crops, and on many occasions this worked well. When successful, it seemed clear that the intent of the image was always panoramic; it was simply shot as a conventional frame. Many images, however, appeared to have been forced into the 2:1 aspect ratio to allow entry into the competition. There would be awesome sweeping lines that appeared to originate below the bottom of the frame and were cut off, or background elements placed very high in the image as a result of the crop.
  • 3. Light for light’s sake: I guess this is pretty self-explanatory. If an image is going to be of colourful clouds alone, aurora, or silhouetted rays, it really needed to be extraordinary to work well. Often these sky-heavy shots worked best when a small amount of foreground framed the image, often with some kind of structure to give immense scale to nature (e.g. a barn, a tree).
Score 70: I honestly thought this would do better. Maybe the lone tree had been overdone, maybe a little too boring, maybe the judges thought the sky was fake?? (it isn’t)
  • 4. The Milky Way bow: I have done many Milky Way panoramas, and I will continue to do them, but I probably will not enter any into competitions. The reason is that, apart from the difference between a northern and a southern hemisphere Milky Way, the stars themselves are practically the same image with something different underneath. I did give some high scores to images that were done well, but overall the standard of this subgenre was disappointing. The main issues were high-ISO noise, garish light painting, muddy foregrounds due to focus or noise issues, and an emphasis on the Milky Way itself without much thought given to the foreground elements.
Milky Way panorama with light painting. Is the foreground strong enough without the Milky Way? I did not enter this image.
  • 5. The web resolution ‘cover up’: As judges, we had the ability to zoom images to full resolution. If an image looked dodgy even in the smaller thumbnail, the flaws were always magnified on closer inspection. I wondered if some entrants took a calculated risk by entering images with crop errors, small areas out of focus, areas affected by motion blur, etc. This was a shame, because some images I considered for very high scores appeared quite degraded on closer inspection, either due to noise or to the artefacts mentioned above.
Score 71: Suffers from pano for pano’s sake, and framing for framing’s sake (the tree on the left); apart from the reflection there is perhaps no real focal point. And it is a relatively well-known image of ours.
  • 6. Over-processed images: Perhaps careless processing might be a better term. This is a fine line to tread, but there were some techniques I found off-putting when used forcefully: over-brightening of shadows resulting in a contrast-less image, blue desaturation, severe halos from dodging dark areas, ‘light bleed’ effects, nuclear greens and reds, muddy highlight reductions, etc. When used well, they appeared natural, but when forced upon the viewer, I often felt like I was being punched in the eyes!
Score 76: I tried to tone down the colours in this one for presentation – one quick way to do this is to view any gamut warnings (Ctrl-Shift-Y) in Photoshop. Two-image stitch taken with a 70-200 f/2.8 II lens. This scored approximately what I thought it would.
  • 7. Icons: I tried hard not to penalise an image just because it was an icon. However, one aspect of photography I feel is important is putting your own stamp of individuality on a scene. So when presented with dozens of Mesa Arch images with almost identical composition and lighting, it was hard for me to overcome innate biases. We were also asked to consider the perceived difficulty of obtaining an image. Let’s face it: as pleasant as that iconic shot may appear, it is not as difficult to reproduce a standard composition as it is to produce something unique from a familiar scene.
Score 78: I wonder if what worked for this image is that it is a very frequently shot location with Mt Rainier visible. In this version, that aspect is removed, which perhaps gives it some originality? I think this image scored about where I thought it would.
  • 8. The ‘well known’ image: Personally, I had seen many of the entries online prior to judging. Many of them were outstanding images, and I had no issue with rewarding them with what I felt was my initial excitement upon viewing them. Perhaps there’s an X factor to judging an image which has never been revealed before, but I honestly feel that as judges in an international competition, we should not have been penalising entrants for wanting to share their images online prior to competition entry.
Score 81: I had thought this to be the third best of my entries but was glad that it received a silver! This scene has been shot before, but from the usual platform it is a lower perspective with parts of the distant stacks obscured. This was a stitched pano of 5 images taken with a 70-200 f/2.8 II.

Overall, I’m never 100% happy with anything short of a WIN, but honestly, given the lack of panoramas shot in the last year and reduced shooting time with two kids running around, I’m very happy to have received these scores. For those reading the critique, feel free to add your say. Remember, the points listed are just my views, and each judge will have a different view; if we all judged in exactly the same manner, there might as well be only one judge! Thanks again to the Pano Awards organisers for running this competition, and I hope I can produce something worthy of the winners’ galleries in the next 12 months!

-D

 

17 thoughts on “Epson Panorama Awards 2014 : Judging/Entrant perspective”

  1. Great comments here Dylan and having been a judge for many years in this comp, you echo many of my thoughts (in fact the Milky Way observation got me in big trouble with the AIPP last year!).

  2. Hey mate. Great article. Although slightly disappointed with my score in the opens, I do feel that the judging was much more consistent this year. I found the point about web-res images and calculated risk interesting. I always struggle with how to sharpen for comps because I’m unaware of what the viewing size is. Anyway, thanks for the insight from the other side.

      1. I think for this comp it was easier because they stated a 3000-pixel long edge, and we could zoom down to that resolution at 100%. It’s not so much the sharpening but stuff like OOF areas and motion blur that was disappointing (particularly with one aspen shot that really looked the goods!)

  3. As usual…a very thoughtful and articulate recap of your thought process. And also as usual, you provide some good input from which we can all learn to improve our approach to our own work. I like the analogy of being punched in the eye by nuclear reds and greens. 🙂

  4. Hi Dylan, interesting write-up, and I fully agree with your overall reasoning and thinking about how you dealt with the judging process. Of course, it is much more difficult these days to produce something original and different from what has been seen before (in particular of iconic locations), and there is nothing wrong with using a well-known composition. But from a viewer’s and a judge’s perspective it will get ‘boring’ very quickly; therefore, the award-winning images of an international competition need to really stand out, so that they become truly inspirational for all of us.

    Guido

    1. Thanks Guido! It was a risk putting in the icons, so I thought I’d steer away. It really does have an effect when you try to mark one after the other of the same scene. Human nature probably tends to win out, even while trying one’s utmost to be objective about it.
