Sagebrush
Feb 26, 2012

echinopsis posted:

they used to make TWO of these per year lmao



the white looks sick as gently caress

weak. they only made three of these ever:


Sagebrush
Feb 26, 2012

long lenses are kind of fun once in a while but i don't get the people who are obsessed with longer and longer and bigger ones. i think they're probably creeps who plan to use them to spy on girls.

a good lens should be small, fast, and radioactive, like this one:

Sagebrush
Feb 26, 2012

in the end it's a minuscule dose. i read a swedish radiology institute study that concluded that a professional photographer carrying a thorium lens on his hip for 6 hours a day, 250 days a year would only increase his radiation dose to 130% of normal sea-level background. and it's mostly low-energy beta, so it's blocked by the metal components of the camera or the fabric of your camera bag anyway.

some old cameras from the 30s have thorium eyepieces, though, and those you'd wanna avoid since putting the radiation source right up against your eyeball is potentially more dangerous than holding it in your hand.

Sagebrush
Feb 26, 2012

there is enough radiation to see, though! if i mount this lens on my mirrorless camera, put on the lens cap, and set it to maximum sensitivity and 1:1 zoom, i occasionally see single white pixels of noise flickering in the live view. i assume this is from particles hitting the sensor, because it doesn't happen with other lenses that are confirmed non-radioactive. spooky
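if you ever want to quantify it instead of squinting at live view, here's a rough sketch of counting suspect hits in a lens-cap dark frame. assumptions all over: it presumes you can get the frame out as a grayscale numpy array, and the 8-sigma threshold is just a guess, not calibrated to anything.

```python
# toy sketch: count pixels that sit far above a dark frame's own noise floor.
# compare a lens-cap frame shot with the thorium lens against one from a
# known non-radioactive lens; the difference in counts is the (rough) signal.
import numpy as np

def count_hits(dark_frame: np.ndarray, sigma: float = 8.0) -> int:
    mean, std = dark_frame.mean(), dark_frame.std()
    # anything more than `sigma` standard deviations above the mean is
    # very unlikely to be ordinary read noise
    return int(np.count_nonzero(dark_frame > mean + sigma * std))

# stand-in data: plain gaussian read noise, plus a copy with ~20 fake hits
rng = np.random.default_rng(0)
boring = rng.normal(100, 5, (2000, 3000))
spicy = boring.copy()
spicy[rng.integers(0, 2000, 20), rng.integers(0, 3000, 20)] += 500
print(count_hits(boring), count_hits(spicy))   # expect ~0 vs ~20
```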

Sagebrush
Feb 26, 2012

yeah, that glittery effect where in-focus areas are highlighted is called focus peaking and it's awesome. it's the only digital manual focusing aid that is as good as a 1970s split-prism.

focus peaking + sensor-shift stabilization + ISO one million has brought a whole new life to my old takumars. highly recommend picking up a body with those features if you like to use old manual focus lenses.

Sagebrush fucked around with this message at 20:59 on Dec 9, 2021

Sagebrush
Feb 26, 2012

there was this awesome time in about 2003-2008 when digital cameras were taking off but digital SLRs hadn't caught on yet, and everyone was just dumping their old SLR lenses at every goodwill. i bought a ton of lenses for next to nothing back then. combining the stuff i bought with my dad's old stuff i now have an example of nearly every lens in the 1970s asahi lineup, from 17/4 to 400/5.6



my favorites are:

35/2, sharp at f/4 and below but soft and glowy with spherical aberration wide open, great for candid stuff
50/1.4, duh, and the thorium glass makes it basically as sharp as a modern L lens while being literally a third the size
135/2.5, excellent portrait lens albeit heavy, and just a huge impressive chunk of glass in general

and man. if you wanna talk knobfeel, pick up one of these lenses designed around manual focus use and run it through the range. no modern lens feels as good.

Sagebrush fucked around with this message at 21:12 on Dec 9, 2021

Sagebrush
Feb 26, 2012

akadajet posted:

where’s sagebrush to tell us how this isn’t how a blimp works?

I'm pretty sure that is how a blimp works when you fly it into the side of an observatory

Sagebrush
Feb 26, 2012

i think i made this point before, but it's funny to me how photographers are all about having the most perfect unaltered image from the camera, perfectly circular bokeh with a gaussian falloff, maximum dynamic range capture, zero flare, zero grain, no chromatic aberration or vignetting or pincushion or blown highlights or any other image-making artifact

and meanwhile cg artists are like gently caress yeah, give me all that poo poo, i want hexagon bokeh, donut bokeh, flare everywhere, halos and ghosts, tri-color chromatic aberration, ultra grainy, film scratches, motion blur, contrast blaster, micro depth of field, hunting for focus, just give me every poo poo thing that cameras do to gently caress up an image. because that makes my cg look realistic

Sagebrush
Feb 26, 2012

yeh it changed its name to instagram

Sagebrush
Feb 26, 2012

Unless your lighting is dramatically changing from shot to shot, you can usually just meter and do a few test shots in manual mode and leave it there. Up to a stop or two of exposure shift is completely negligible with modern software.

Sagebrush
Feb 26, 2012

polyester concept posted:

I am probably going to stick with aps-c until medium format drops a bit more and then skip full frame entirely :2bong:

this is exactly what good ol' k-rock says you should do :jeb:

i don't really see the need for medium format digital outside professional studio work. historically you'd use it for the greatly increased resolution, but 24-30 mp is enough resolution for nearly any purpose and you can get that in 135/full frame or even a crop sensor no problem.

obviously the enhanced light-gathering of the medium format sensor is great, and the decreased noise, but modern sensors are just ridiculously good at low-light regardless of size, and the huge medium format lenses don't tend to be that fast anyway so you're just kinda trading it off.

like it's fuckin bananas that i can go out in the dark with my z6 and shoot a photo at ISO 25000, 1/8 second at 135mm, handheld (with sensor stabilization) and it is not only well lit but completely sharp and still has less noise than a frame of Tri-X. that was completely impossible even, idk, like 6 years ago. fast lenses are now basically just bokeh generators

so on that note the one unique thing a mf camera can do i guess is give you the medium format bokeh, but again, really only something you'd notice in a portrait studio. idk

fart simpson posted:

those new dji cameras with the gimbal built in have a lidar focus system that looks pretty neat

it's neat, but what's the purported advantage of this over phase detection?

Sagebrush fucked around with this message at 02:38 on Dec 17, 2021

Sagebrush
Feb 26, 2012

lidar focus reminds me of an anecdote about the F-117 stealth fighter: after they completed the prototype, they took a bunch of pictures of it, and the photos all came out blurry. after some head-scratching they realized that their polaroid camera used an ultrasonic rangefinder to measure the distance to the subject, and the plane's faceted shape was reflecting the ultrasound pulses away from the emitter just like it was designed to do with radar.

Sagebrush
Feb 26, 2012

echinopsis posted:

what
has anyone at all even said they are a nikon noodler

i have a nikon. the z6 is basically a sony a7iii with better ergonomics, though.

i also have an old canon 5d2, and all my lenses are pentax

Sagebrush
Feb 26, 2012

fluorite is crystalline, so it's not a glass.

Sagebrush
Feb 26, 2012

Get a fill flash for that first one

Sagebrush
Feb 26, 2012

here's a cool article i read about how smartphone photography works these days. there are far more computational techniques going on behind the scenes than i was aware of. obvious stuff like exposure bracketing and color correction, but also poo poo like focus bracketing, subpixel stacking, shutter coding, computational depth-mapping, ai convolution. it's nuts.

https://www.dpreview.com/articles/9828658229/computational-photography-part-i-what-is-computational-photography



in essence, good smartphone cameras (iphones, google pixels) don't necessarily have better optical hardware than other phones. they take the same relatively crappy input that other phones capture (physics is physics) and do extensive computational image synthesis to make the output look great. most of the improvements in phone camera quality in the last few years are just from having enough ram and processor power to pull this off.
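as a taste of how the simplest of these tricks works, here's a toy sketch of burst averaging, assuming perfectly aligned frames (which real phones absolutely do not get for free -- they align, reject outliers, and merge at the subpixel level first). noise drops roughly with the square root of the frame count:

```python
# stack a burst of aligned noisy frames and average them.
# 16 frames should cut the noise by about a factor of 4.
import numpy as np

def stack(frames: list[np.ndarray]) -> np.ndarray:
    return np.mean(np.stack(frames), axis=0)

rng = np.random.default_rng(1)
scene = np.tile(np.linspace(0, 1, 256), (256, 1))   # stand-in "true" image
burst = [scene + rng.normal(0, 0.1, scene.shape) for _ in range(16)]
print(np.std(burst[0] - scene), np.std(stack(burst) - scene))  # ~0.1 vs ~0.025
```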

i wonder what would happen if you combined these techniques with a full-frame sensor and a big lens?

Sagebrush
Feb 26, 2012

Jenny Agutter posted:

Major thing is throughput needs to be much much higher for a camera, an iphone 13 pro has 12mp sensors while professional cameras have 45-100mp.

who says it has to be on a 100-megapixel camera? there are plenty of 20-30 mp full-frame cameras out there being used professionally. glom together an iphone 13 and a nikon z6 and see what happens.

i think the stumbling block is that many of the techniques rely on having multiple slightly offset cameras, or microlenses on the sensors. that's fundamentally different from the traditional camera layout.

Sagebrush fucked around with this message at 21:05 on Jan 4, 2022

Sagebrush
Feb 26, 2012

MrQueasy posted:

I think you underestimate the rift between the camera grognards and the "common people who just want to take pictures". After Canon and Nikon slept too long through the 2000s, there's a dwindling amount of people who are willing to embrace more automation in their "big cameras". The market that's left is focused on glass and sensor quality and shooting in Manual Mode Only.

i would love to have all these computational features on my Z6. just have a "green square plus" mode where i can swipe between portrait/night shot/macro/etc just like on a phone and let the thing do all its instagram AI magic. or even better, let me enable and disable specific processing techniques. maybe i do want focus stacking on this one, and ML driven local tone mapping, but skip the auto-exposure bracketing because i'm going for a certain mood. i wouldn't want this as my only shooting mode, of course, but having the option there can only be a benefit.

i think the whole situation raises the question of truth in photography. what is the truest photo? we have this idea that the camera is objective and infallible -- that the image it captures is reality, and that deviating from that original capture is a violation of the image's truth. but is this the only definition of truth?

say i see the most gorgeous electric orange sunset of my life. i take a picture of it and look at it on my phone. it doesn't look right to me; the colors aren't right and it just doesn't glow like it did in real life. should i shrug and say "well, the camera must be right, i shouldn't mess with it?" or should i play with the colors and make it look like i remember? is there even an objective truth for the camera to capture? it doesn't see the same wavelengths i do. it can't even see orange light! the orange that exists in the image is a completely fictional artifact, captured as a balance of red and green responses on the sensor, processed using some algorithm a nikon engineer came up with, and displayed in red and green light on a screen that my eye happens to blend into orange. when i think of it that way, the idea of "true" colors being the ones right out of the camera is absurd. my objective truth is what i saw, and i feel completely justified in editing the image to make it look like that.

and then i wonder -- how far can i go? anything that makes the image closer to what i perceive as ideal is fine, and increases the image's value and truth. all the exposure bracketing and focus stacking and sharpening techniques are allowed. i don't see grain, or defocus, or motion blur (well, sometimes) in real life -- so unless i am trying to use those effects on purpose, there's nothing wrong with taking them out or working them over.

what about synthetic lighting? my eyes don't react to light the same way a camera's sensor does. my wife and i are out at the bar and i see her in the most beautiful soft glow. i take a picture of her and it's harsh and doesn't capture the mood. is it wrong to use the synthetic portrait lighting feature to make her look like what i saw?

what about outright editing the subjects of the image? my cousin has a giant pimple on her nose in the family photo. am i obligated to leave it in, or should i remove it? what is closer to her truth? does her self-image involve a pimple on her nose? is there a difference between her asking me to do it, me doing it on my own, or the camera doing it without either of our involvement?

and to take it even further, what is the difference between doing this in software and in camera? portrait photographers have lighting setups that allow them to tweak the image in exquisite detail, and i'm sure all the pros and grognards are fine with that. is it more "true" to life if they do it with $20,000 in flashes instead of dragging the synthetic lighting bubble around?

when i was younger i, too, had the idea that what the camera makes is inviolable. that you can do what you want with the camera, but once you press that shutter button the image is done, and anything past that is just cheap trickery. i got over that. to hell with the idea that you can only make the image with the camera optics. i see photography as an imperfect representation of a situation i remember, and i'll do whatever the hell i want to make it feel the way i prefer.

Sagebrush
Feb 26, 2012

Progressive JPEG posted:

when it comes to what you get from a camera, are "raw" formats actually "heres the ccd charges/photon counts" or is it just "skipped the compression step"

It's not quite the photon counts, but it's close. It is a record of exactly what brightness value was reported by every photosite on the sensor, to whatever bit depth the ADC allows (typically 12 or 14 bits). Because of the Bayer filter, each site is only recording red, green, or blue light alone. To get a full color image, you need to process the RAW file using a model of the specific sensor configuration and filter behavior.



Image 2 is what the RAW file contains. In image 3 the filter colors are applied, and image 4 shows one sort of interpolation to estimate the RGB value at each pixel. So even if you "don't do any processing", the conversion from a Bayer pattern image to a full color one involves some blending and blurring, and the "true color" produced depends entirely on the algorithm you use. The RAW has the greatest amount of information about the light that the camera saw, and hence the most flexibility.
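Here's a toy sketch of the simplest possible conversion, a bilinear demosaic (RGGB layout assumed; real converters use far smarter interpolation, plus the sensor's color matrix and white balance, none of which is shown here):

```python
# bilinear demosaic of an RGGB Bayer mosaic: each missing channel value is
# the average of the nearest measured samples of that channel.
import numpy as np

def demosaic_bilinear(mosaic: np.ndarray) -> np.ndarray:
    h, w = mosaic.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True                          # red photosites
    masks[0::2, 1::2, 1] = masks[1::2, 0::2, 1] = True   # green photosites
    masks[1::2, 1::2, 2] = True                          # blue photosites

    def box_sum(a):  # sum over each pixel's 3x3 neighborhood (edges wrap)
        return sum(np.roll(np.roll(a, dy, 0), dx, 1)
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1))

    rgb = np.zeros((h, w, 3))
    for c in range(3):
        known = np.where(masks[:, :, c], mosaic, 0.0)
        count = box_sum(masks[:, :, c].astype(float))
        interp = box_sum(known) / np.maximum(count, 1e-9)
        # keep measured samples exactly; interpolate everywhere else
        rgb[:, :, c] = np.where(masks[:, :, c], mosaic, interp)
    return rgb
```

Even in this dumbest version you can see the point: two thirds of every output pixel's color is invented by the algorithm, not measured.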

Sagebrush fucked around with this message at 09:31 on Jan 5, 2022

Sagebrush
Feb 26, 2012

Follow up post: there is a different type of sensor, trade named Foveon, that doesn't require a Bayer filter. In a Foveon sensor, three photosites are stacked on top of one another at each pixel location, and the system exploits the different penetration depths of different colors of light into silicon to separate the colors. This means every pixel captures true RGB data, so there is no blurring or interpolation required, and the resulting image is noticeably sharper.



I think only Sigma cameras ever used these, and they have pretty much died out today. I dunno why they never took off. I guess it's more challenging to make the stacked sensors and Bayer algorithms are good enough.

Sagebrush
Feb 26, 2012

echi is talking about aperture synthesis.

https://en.wikipedia.org/wiki/Aperture_synthesis

basically the concept there is: a telescope's resolution is limited by the size of its entrance pupil (aperture, main mirror, etc). wouldn't it be great if we could build a telescope that was, like, a mile in diameter, and get insane resolution? well we can't. but mathematicians figured out that if we put a whole pile of small sensors in a field a mile across, we could take the low-resolution signals from each one and computationally combine them into a much higher resolution image. the Very Large Array, with its 27 radio telescopes as seen in Contact, is an implementation of this in the radio spectrum.

just putting two sensors a mile apart doesn't give you a good image, though. two sensors form an interferometer, which is a useful tool by itself, but it won't produce a sharp spatial image. the more small sensors you can put into that mile-wide field, the better the effective resolution you can achieve. theoretically, if you had an infinite number of small sensors in that field, each one capturing a single tiny low-res view of the sky, you could synthesize a high-resolution image that is exactly equivalent to building one mile-wide mirror, even though none of the sensors acquired data that sharp. in reality, you can't build infinite sensors either, so you put in as many as you can and get a resolution that is partway between small sensor and mile-wide mirror.
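the resolution math behind this is basically one line. back-of-envelope sketch (the 25 m dish and 36 km maximum baseline are the VLA's real numbers; 21 cm is the hydrogen line):

```python
# angular resolution of a diffraction-limited aperture: ~1.22 * wavelength / D.
# for an interferometer array, D is the longest baseline, not the dish size.
import math

def resolution_arcsec(wavelength_m: float, diameter_m: float) -> float:
    return math.degrees(1.22 * wavelength_m / diameter_m) * 3600

print(resolution_arcsec(0.21, 25))      # single dish: ~2100 arcsec of blur
print(resolution_arcsec(0.21, 36_000))  # synthesized baseline: ~1.5 arcsec
```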

this concept of a million sensors all capturing the same subject from a slightly different angle is (almost*) exactly the plenoptic camera that fart simpson mentioned. you put a grid of micro-lenses over the sensor and capture thousands of tiny (say 10-pixel diameter) fisheye views of the scene, each one subtly different, and then process those into one image. so echi, yes, this already exists in camera technology, though it isn't common by any means. there was a company called Lytro that went out of business a few years ago that made plenoptic cameras. they were a neat trick but i don't think anyone ever found a use for them.

on a macroscopic scale, optical aperture synthesis is much less practical. people are doing it for astronomy, but nobody's going to be walking around with a pizza box covered in 10,000 camera sensors to simulate having a lens 18 inches across any time soon. there just isn't really a need for that resolution. it would be a pretty good optics/photonics phd project though i bet.

-------

focus stacking, on the other hand, is exactly what mrqueasy says it is: quickly take a bunch of images at different focus settings, then computationally stack the sharpest parts of each one to simulate a much larger depth of field than the lens allows. very useful in situations like macro photography. theoretically you could also arbitrarily change the point of focus, the depth of field, and the quality of the bokeh after the fact. kinda neat i guess?
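a toy sketch of the core of it, assuming the frames are already aligned (the crude Laplacian here is a stand-in for whatever sharpness measure a real implementation uses, and real ones also blend across seams instead of hard-picking):

```python
# focus stacking: per pixel, pick the frame with the strongest local
# contrast and take that frame's value.
import numpy as np

def laplacian(img: np.ndarray) -> np.ndarray:
    return np.abs(4 * img
                  - np.roll(img, 1, 0) - np.roll(img, -1, 0)
                  - np.roll(img, 1, 1) - np.roll(img, -1, 1))

def focus_stack(frames: list[np.ndarray]) -> np.ndarray:
    sharpness = np.stack([laplacian(f) for f in frames])   # (n, h, w)
    best = np.argmax(sharpness, axis=0)                    # sharpest frame per pixel
    stack = np.stack(frames)
    return np.take_along_axis(stack, best[None], axis=0)[0]
```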

Sagebrush fucked around with this message at 22:11 on Jan 5, 2022

Sagebrush
Feb 26, 2012

idk, i agree that when it's just dickwaving about specs or lenses it's boring, but the computational photography stuff is neat

Sagebrush
Feb 26, 2012

it has been known for a very long time that the compressed perspective of a long lens is more flattering. that's why short, fast telephotos are called portrait lenses. i've often wondered why the flattened look is more appealing. i think it might just be the relative absence of perspective distortion -- certainly a portrait shot with a wide lens, where the person's forehead and chin slope away and their nose looks huge, is not flattering. eliminate that and the picture looks better.

some people take it to ridiculous extents. i once saw a photographer at the beach taking pictures of a model from 100 feet away with like a 500mm lens. he was giving her instructions with a walkie-talkie. lol

Sagebrush
Feb 26, 2012

I made a couple of custom tone maps for my camera that make it look sort of like Kodachrome and sort of like Tri-X, and honestly why would you need anything else

Sagebrush
Feb 26, 2012

putting a uv filter on your lens is like permanently shooting through a window. you shouldn't do it unless you are, like, covering the dakar rally and expect to have sand and gravel flying at you.

if you must do it, get a really good multi-coated hoya one or something so it at least is as optically good as the rest of your lens.

Sagebrush
Feb 26, 2012

i thought that models usually got paid for their time. you're saying this model is expecting you to charge her for the photos?

is she really a model, or just a lady who wants pictures for her instagram? who is contracting whose services here? what sort of power dynamics are at work?

i guess it could go both ways. annie leibovitz probably isn't paying people to pose for her, and cindy crawford (or whoever, idk who famous models are today) isn't paying photographers to take pictures of her. how does it work if it's a famous model and a famous photographer together? i would guess that maybe neither of them pay each other up front, but both enter into a contract to share the profits from the resulting work? but that only works when they're on the same "level," so to speak. some random dude isn't going to get cindy crawford to model for him for profit-sharing.

what happens if you take a hilarious bad photo of this girl and want to show it, but she doesn't want it to be publicized? have you worked this out on paper beforehand?

seems complicated and kinda fraught.

Sagebrush
Feb 26, 2012

Kazinsal posted:

by the end of the wedding everyone is way too fuckin smashed to be able to tell if a photo is just good or the best thing ever committed to film

go ham, my bro

uh, pretty sure they're sober by the time the proofs come back, and then the bride gets to decide that you ruined the most important day of her life

Sagebrush
Feb 26, 2012

Modern digital sensors objectively have more dynamic range than film, but like vinyl records versus CDs, some people like the effects you get with the analog medium better. Film and digital are both roughly linear and comparable in the middle of their range, but at the extremes film's characteristic curve has a toe and a shoulder, so highlights roll off gently in a way that's different from digital clipping. Compare an overdriven analog tube amp to an overdriven digital signal. Neither one is more "correct," but the analog one might sound more pleasant.
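A toy sketch of the difference (the tanh shoulder is just an illustrative stand-in, not any real film's characteristic curve):

```python
# hard digital clip vs a soft film-style shoulder: both are linear at low
# exposure, but they part ways as the highlights approach full scale.
import numpy as np

exposure = np.linspace(0, 3, 7)          # 1.0 = nominal full scale
digital = np.clip(exposure, 0, 1)        # linear, then a brick wall
film = np.tanh(exposure)                 # linear-ish at first, gentle rolloff
for e, d, f in zip(exposure, digital, film):
    print(f"{e:4.1f}  digital={d:5.3f}  film-ish={f:5.3f}")
```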

Also digital is better at capturing detail in deep shadows and film has better handling of blown highlights, but you probably knew that.

Sagebrush
Feb 26, 2012

no you aren't. it's sitting on the floor

Sagebrush
Feb 26, 2012

yeah, 135mm is a portrait lens even on full-frame.

i mean you could always do that thing i posted about before and just stand 30 feet away and yell all your directions.

Sagebrush
Feb 26, 2012

that's the lens case. it says right on it

Sagebrush
Feb 26, 2012

https://i.imgur.com/f9bWoCE.mp4

Sagebrush
Feb 26, 2012

you are correct that the depth of field varies with focal length, aperture, and the distance to the focal point. when you focus more closely your depth of field is narrower, for a given lens configuration and aperture diameter.

i don't know if there's a rule of thumb for it. you can certainly calculate it mathematically, though, and there are lookup tables, paper slide rule calculators, and i assume phone apps now to help. old manual lenses have a scale on them to help you figure it out:



the red diamond is the index mark. the lens is set to f/8 on the aperture ring, and focused on a point 2 meters / 6.5 feet away. the scale on either side of the diamond gives you the depth of field, so you can see that at f/8 and this focus distance, the depth of field is between about 1.2m and 7m.

note that depth of field is perceptual. there is only ever one plane that is truly in perfect focus, and what we call the depth of field is the range that is acceptably sharp. the scale on this lens is based on what asahi engineers figured was about right given the scenario (i.e. shooting on film, handheld, and viewing prints at reasonable enlargements), so if you are using a 50-mp camera on a tripod and pixel peeping, that scale may be too loose for you. but it's a good starting point.

there are two other neat things about this scale. first, the reason the 8, 10 and 3 are red is because those represent the hyperfocal settings. if you put the lens on f/8 and focus it to 3 meters, you'll see that one end of the 8 mark on the scale is at infinity -- so everything from infinity down to 1.5m will be in acceptable focus. the idea is that you can set it there, choose a shutter speed that fits your lighting, and forget about it. point and shoot at any subject more than 1.5 meters away. in the image above, the infinity symbol is over the 11, so you need to be at f/11 at this focal distance to have everything out to the horizon in focus. easy!
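if you'd rather compute it than read the ring, here's a sketch using the standard thin-lens depth-of-field formulas, assuming the lens pictured is something like a 28mm and using the usual 0.03mm circle of confusion for 35mm film:

```python
# hyperfocal distance and near/far depth-of-field limits.
# f_mm = focal length, N = f-number, coc_mm = circle of confusion.
def hyperfocal_mm(f_mm: float, N: float, coc_mm: float = 0.03) -> float:
    return f_mm ** 2 / (N * coc_mm) + f_mm

def dof_limits_m(f_mm: float, N: float, focus_m: float, coc_mm: float = 0.03):
    H = hyperfocal_mm(f_mm, N, coc_mm)
    s = focus_m * 1000.0                     # focus distance in mm
    near = H * s / (H + (s - f_mm))
    far = H * s / (H - (s - f_mm)) if s < H else float("inf")
    return near / 1000.0, far / 1000.0       # metres

# 28mm at f/8 focused at 2m: same ballpark as the ring above
print(dof_limits_m(28, 8, 2.0))              # ~(1.25, 4.98)
# and the hyperfocal distance lands close to the red 3m mark
print(hyperfocal_mm(28, 8) / 1000.0, "m")    # ~3.29 m
```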

second, the little R to the left of the scale is the infrared focus mark. all light is bent by lenses slightly differently depending on wavelength; this is how a prism separates light into a rainbow, and why you get chromatic aberration on edges. the red light bends less than the blue light, they end up in slightly different spots on the film or sensor, and sharp edges start to form a rainbow. some people like to do infrared photography, but we can't see IR, so if you look through the lens and focus it in visible light, the IR image will be defocused. instead, you focus in visible light, note the distance on the scale against the index mark, and turn the ring so that distance lines up with the IR mark before taking the picture.

i regret that new lenses (other than super professional ones) don't have these scales. i know autofocus is better and faster for all practical purposes, but i just like the thoughtful mechanical nature of manual lenses.

:eng101:

goddammit beaten again for once again putting too much effort into my posts

Sagebrush fucked around with this message at 19:43 on Feb 6, 2022

Sagebrush
Feb 26, 2012

i think IR photography was more popular back in the film days because all you had to do to screw around with it was buy a roll of IR film. a lot harder now that you need a specially modified camera.

it makes neat photos. most plants reflect strongly in IR to stay cool, so their leaves all come out white:



it also makes skin lighter, turning caucasians ghostly pale and making them look anywhere from ethereal to terrifying:



you can get most of the blemish-removing effects of IR just by using a red filter. people still look paler and smoother but not like a banshee.

people these days don't seem to use colored filters with black and white photography all that much. i guess because black and white is just "saturation = 0" in all these apps. but you can dramatically change the look of your work and boost your cred by shooting with a filter, or at least mixing the channels in post to simulate one. i keep an orange filter in my camera bag for when i am feeling particularly artsy and want to shoot black and white in-camera; orange gives you some skin-smoothing effects and also darkens blue skies for super dramatic ansel adams style clouds, as below. just remember to set your white balance to daylight so the camera doesn't autocorrect it away!
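here's a rough sketch of the channel-mixer version (the orange-filter weights are guesses for illustration, not any calibrated filter curve):

```python
# fake a colored filter in post: weight the channels before desaturating
# instead of just averaging them.
import numpy as np

def to_bw(rgb: np.ndarray, weights=(1/3, 1/3, 1/3)) -> float:
    w = np.asarray(weights, dtype=float)
    return float(rgb @ (w / w.sum()))

# plain desaturation vs an "orange filter" mix that favors red, mutes blue
sky_blue = np.array([0.35, 0.55, 0.95])
skin = np.array([0.85, 0.65, 0.55])
for name, px in (("sky", sky_blue), ("skin", skin)):
    plain = to_bw(px)
    orange = to_bw(px, weights=(0.6, 0.3, 0.1))
    print(f"{name}: plain={plain:.2f} orange={orange:.2f}")
```

run it and you see the effect i'm describing: the sky goes darker (0.62 to 0.47) and skin goes lighter (0.68 to 0.76), which is exactly the dramatic-clouds-plus-smooth-faces look.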

Sagebrush fucked around with this message at 05:04 on Feb 7, 2022

Sagebrush
Feb 26, 2012

oh and here's another fun effect, at the opposite end of the scale. blue filters tend to exaggerate differences in skin tones, making all your wrinkles and spots show up with high contrast. and if you go past the blue end and shoot in ultraviolet, every little tiny deposit of melanin -- which of course is there specifically to absorb UV -- becomes obvious.



shooting in UV is even harder than IR, though, because glass lenses are opaque to it. you have to use special lenses made of fused quartz instead.

Sagebrush
Feb 26, 2012

echinopsis posted:

can someone correct me if I am wrong but if you're shooting raw this won't matter because you can always change white balance in post?

that's correct, but i'm talking about shooting black and white in camera rather than processing it afterwards. idk. a lot of the time these days i just leave it on jpeg because i want to send the photos to someone on instagram later that day without doing all the raw workflow poo poo.

if you do shoot raw, you still need to set your white balance to daylight, giving you a bright orange or red photograph where warm-toned areas (e.g. skin) become lighter and cool areas (skies) get darker. then you just desaturate that for the black and white image.

you can also do this by taking the color image and playing with the channel mixer, but there is some loss of tone resolution in the conversion that you won't have if you filter it at the lens. and of course then you gotta have a computer and photoshop with you. but maybe that doesn't matter, and the channel mixer does let you fine-tune it more.

decisions

Sagebrush
Feb 26, 2012

echinopsis posted:

yeah good idea

although the "correct" white balance isn't always what's best, but suppose this is just another place where subjectivity and personal style is expressed

there is no such thing as an objectively correct white balance. there isn't even such a thing as white light. it's all perceptual. even if you balance it to what most humans consider neutral under most circumstances, the photo is still going to look different under different external lighting, etc. older people will see it more orange than younger people, because as the lens of the eye ages it starts to turn slightly yellow.

i generally like my photos a little on the warm side of "neutral" white.

Sagebrush
Feb 26, 2012


"Uh...I think that guy over there is taking pictures of you"

"Oh my God, gross, where"

Sagebrush
Feb 26, 2012

i too would warm it up, and futz with the contrast and saturation sliders. looks like it was shot on a gray overcast day. maybe it was, but we have the technology to make it seem otherwise


Sagebrush
Feb 26, 2012

here's a few minutes in photoshop. i am not a professional portrait retoucher or anything, but i think it looks better.



minor curve adjustment to boost the skin tones, increase contrast, and brighten image overall; color balance to add yellow (enhances the hair and gold frames) and remove magenta (redness in skin), also giving a slight warming effect; mask in the hat from the original photo, because it's close to washed out already and the curved version blows it out; apply my secret magic sharpening technique for crispness. the sharpening might be overboard for a glamour shot but eh that's a personal taste thing. i kinda like grain.

Sagebrush fucked around with this message at 02:11 on Mar 13, 2022
