echinopsis
Apr 13, 2004




echinopsis
Apr 13, 2004



is it obvious what this is?

Jenny Agutter
Mar 18, 2009



No. some sort of granola. or cereal maybe


echinopsis
Apr 13, 2004



Jenny Agutter posted:

No. some sort of granola. or cereal maybe

apple crumble. guess it was easier to guess than I thought

Sagebrush
Feb 26, 2012

Well, actually...

here's a cool article i read about how smartphone photography works these days. there are far more computational techniques going on behind the scenes than i was aware of. obvious stuff like exposure bracketing and color correction, but also poo poo like focus bracketing, subpixel stacking, shutter coding, computational depth-mapping, ai convolution. it's nuts.

https://www.dpreview.com/articles/9828658229/computational-photography-part-i-what-is-computational-photography



in essence, good smartphone cameras (iphones, google pixels) don't necessarily have better optical hardware than other phones. they take the same relatively crappy input that other phones capture (physics is physics) and do extensive computational image synthesis to make the output look great. most of the improvements in phone camera quality in the last few years are just from having enough ram and processor power to pull this off.

i wonder what would happen if you combined these techniques with a full-frame sensor and a big lens?
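none of this is from the article, but the burst-stacking idea behind phone night modes is small enough to sketch: average N aligned noisy frames and shot noise drops by roughly sqrt(N). a toy numpy version (all names made up, alignment hand-waved away):

```python
import numpy as np

def stack_frames(frames):
    """Average a burst of aligned noisy frames.

    Shot noise falls roughly as 1/sqrt(N) for N frames, which is
    the core trick behind smartphone night/low-light modes.
    """
    return np.mean(np.stack(frames, axis=0), axis=0)

# toy demo: a flat gray scene plus per-frame sensor noise
rng = np.random.default_rng(0)
scene = np.full((4, 4), 0.5)
frames = [scene + rng.normal(0, 0.1, scene.shape) for _ in range(64)]
stacked = stack_frames(frames)

single_err = np.abs(frames[0] - scene).mean()
stacked_err = np.abs(stacked - scene).mean()
# stacked_err comes out around 1/8 of single_err (sqrt(64) = 8)
```

real pipelines have to align the frames and reject ghosting from moving subjects first, which is where most of the engineering actually goes.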

Gentle Autist
Jun 4, 2003



Sagebrush posted:

here's a cool article i read about how smartphone photography works these days. there are far more computational techniques going on behind the scenes than i was aware of. obvious stuff like exposure bracketing and color correction, but also poo poo like focus bracketing, subpixel stacking, shutter coding, computational depth-mapping, ai convolution. it's nuts.

https://www.dpreview.com/articles/9828658229/computational-photography-part-i-what-is-computational-photography



in essence, good smartphone cameras (iphones, google pixels) don't necessarily have better optical hardware than other phones. they take the same relatively crappy input that other phones capture (physics is physics) and do extensive computational image synthesis to make the output look great. most of the improvements in phone camera quality in the last few years are just from having enough ram and processor power to pull this off.

i wonder what would happen if you combined these techniques with a full-frame sensor and a big lens?

this owns. apple make a micro 4/3 camera with an MFT mount and an A15, thanks

Jenny Agutter
Mar 18, 2009



Major thing is throughput needs to be much much higher for a camera, an iphone 13 pro has 12mp sensors while professional cameras have 45-100mp. the Nikon z9 actually has a stacked sensor/logic/RAM that allows 20fps capture at 45mp and it uses deep learning for autofocus. I'm sure stuff like auto HDR isn't far off, although those raw files are going to be ludicrously huge

Sagebrush
Feb 26, 2012

Well, actually...

Jenny Agutter posted:

Major thing is throughput needs to be much much higher for a camera, an iphone 13 pro has 12mp sensors while professional cameras have 45-100mp.

who says it has to be on a 100-megapixel camera? there are plenty of 20-30 mp full-frame cameras out there being used professionally. glom together an iphone 13 and a nikon z6 and see what happens.

i think the stumbling block is that many of the techniques rely on having multiple slightly offset cameras, or microlenses on the sensors. that's fundamentally different from the traditional camera layout.

Sagebrush fucked around with this message at 20:05 on Jan 4, 2022

MrQueasy
Nov 15, 2005

Quit shakin' me, kid!

Sagebrush posted:

i wonder what would happen if you combined these techniques with a full-frame sensor and a big lens?

LOL, the purists would absolutely catch fire and begin the holy war.

I think you underestimate the rift between the camera grognards and the "common people who just want to take pictures". After Canon and Nikon slept too long through the 2000s, there's a dwindling amount of people who are willing to embrace more automation in their "big cameras". The market that's left is focused on glass and sensor quality and shooting in Manual Mode Only.

akadajet
Sep 14, 2003



all cameras are good under god's eye

Jimmy Carter
Nov 3, 2005

THIS MOTHERDUCKER
FLIES IN STYLE


the spec sheet for cameras is locked 3-5 years in advance, which is why it takes them so long to do the simplest consumer-friendly things like being able to charge via USB, or supporting USB-C. Hell, it took almost a year to get a firmware patch enabling 24fps for the Canon 5D MkII after that blew up in the marketplace. There's little indication there's been any structural changes since then.

Asking them to use more modern algorithms for denoising is laffo.

echinopsis
Apr 13, 2004



MrQueasy posted:

LOL, the purists would absolutely catch fire and begin the holy war.

I think you underestimate the rift between the camera grognards and the "common people who just want to take pictures". After Canon and Nikon slept too long through the 2000s, there's a dwindling amount of people who are willing to embrace more automation in their "big cameras". The market that's left is focused on glass and sensor quality and shooting in Manual Mode Only.

citation needed

Jonny 290
May 5, 2005




[ASK] me about OS/2 Warp


i dont think megapixel races are the way forward. I was extremely delighted when i took a macro shot of my cat and realized that i could see the greenpos screen from my laptop in the reflection on her eye.

the "all cameras must be 100mp by 2024 or else" stuff has big "clock speeds are going up - by 2019 we'll all have 14ghz processors" energy

MrQueasy
Nov 15, 2005

Quit shakin' me, kid!

echinopsis posted:

citation needed

I admit that it's anecdotal, but the only reason I still have my dslr is because I enjoy knowing that I put my own spin on the limitations inherent to my kit. Whether that's managing reds so they don't swamp my sensor, or deciding on an aperture/speed combo for a scene, or whatever.

If I wanted to focus on composition and processing, I can do that on my phone for the majority of things... :shrug: I dunno, it just feels like camera innovation has moved to phones for now, and the big camera creators (Canon, Sony, Nikon) are all too conservative to risk losing the Loyalists.

Jenny Agutter
Mar 18, 2009



Jonny 290 posted:

i dont think megapixel races are the way forward. I was extremely delighted when i took a macro shot of my cat and realized that i could see the greenpos screen from my laptop in the reflection on her eye.

the "all cameras must be 100mp by 2024 or else" stuff has big "clock speeds are going up - by 2019 we'll all have 14ghz processors" energy

Who are you talking to man?

echinopsis
Apr 13, 2004



MrQueasy posted:

I admit that it's anecdotal, but the only reason I still have my dslr is because I enjoy knowing that I put my own spin on the limitations inherent to my kit. Whether that's managing reds so they don't swamp my sensor, or deciding on an aperture/speed combo for a scene, or whatever.

If I wanted to focus on composition and processing, I can do that on my phone for the majority of things... :shrug: I dunno, it just feels like camera innovation has moved to phones for now, and the big camera creators (Canon, Sony, Nikon) are all too conservative to risk losing the Loyalists.

eh see what I see is new videographers embracing new mirrorless cameras and that's only growing

once people worked out the modern dslrs and mirrorless cameras did great video it's just exploded


I really think mirrorless has revitalised the scene. it's totally revitalised photography for me


the local TFP scenes etc here are exclusively non-phone. you might get great composition on a phone but you're really limited without good glass for getting a photo with a vibe. at least in portrait photography


I also wish for collaboration. cameras finally have good face tracking but drat if apple had been running the show? gently caress..




all just anecdotal too. probably just shows the communities we see

Sagebrush
Feb 26, 2012

Well, actually...

MrQueasy posted:

I think you underestimate the rift between the camera grognards and the "common people who just want to take pictures". After Canon and Nikon slept too long through the 2000s, there's a dwindling amount of people who are willing to embrace more automation in their "big cameras". The market that's left is focused on glass and sensor quality and shooting in Manual Mode Only.

i would love to have all these computational features on my Z6. just have a "green square plus" mode where i can swipe between portrait/night shot/macro/etc just like on a phone and let the thing do all its instagram AI magic. or even better, let me enable and disable specific processing techniques. maybe i do want focus stacking on this one, and ML driven local tone mapping, but skip the auto-exposure bracketing because i'm going for a certain mood. i wouldn't want this as my only shooting mode, of course, but having the option there can only be a benefit.

i think the whole situation raises the question of truth in photography. what is the truest photo? we have this idea that the camera is objective and infallible -- that the image it captures is reality, and that deviating from that original capture is a violation of the image's truth. but is this the only definition of truth?

say i see the most gorgeous electric orange sunset of my life. i take a picture of it and look at it on my phone. it doesn't look right to me; the colors aren't right and it just doesn't glow like it did in real life. should i shrug and say "well, the camera must be right, i shouldn't mess with it?" or should i play with the colors and make it look like i remember? is there even an objective truth for the camera to capture? it doesn't see the same wavelengths i do. it can't even see orange light! the orange that exists in the image is a completely fictional artifact, captured as a balance of red and green responses on the sensor, processed using some algorithm a nikon engineer came up with, and displayed in red and green light on a screen that my eye happens to blend into orange. when i think of it that way, the idea of "true" colors being the ones right out of the camera is absurd. my objective truth is what i saw, and i feel completely justified in editing the image to make it look like that.

and then i wonder -- how far can i go? anything that makes the image closer to what i perceive as ideal is fine, and increases the image's value and truth. all the exposure bracketing and focus stacking and sharpening techniques are allowed. i don't see grain, or defocus, or motion blur (well, sometimes) in real life -- so unless i am trying to use those effects on purpose, there's nothing wrong with taking them out or working them over.

what about synthetic lighting? my eyes don't react to light the same way a camera's sensor does. my wife and i are out at the bar and i see her in the most beautiful soft glow. i take a picture of her and it's harsh and doesn't capture the mood. is it wrong to use the synthetic portrait lighting feature to make her look like what i saw?

what about outright editing the subjects of the image? my cousin has a giant pimple on her nose in the family photo. am i obligated to leave it in, or should i remove it? what is closer to her truth? does her self-image involve a pimple on her nose? is there a difference between her asking me to do it, me doing it on my own, or the camera doing it without either of our involvement?

and to take it even further, what is the difference between doing this in software and in camera? portrait photographers have lighting setups that allow them to tweak the image in exquisite detail, and i'm sure all the pros and grognards are fine with that. is it more "true" to life if they do it with $20,000 in flashes instead of dragging the synthetic lighting bubble around?

when i was younger i, too, had the idea that what the camera makes is inviolable. that you can do what you want with the camera, but once you press that shutter button the image is done, and anything past that is just cheap trickery. i got over that. to hell with the idea that you can only make the image with the camera optics. i see photography as an imperfect representation of a situation i remember, and i'll do whatever the hell i want to make it feel the way i prefer.

fart simpson
Jul 2, 2005




Sagebrush posted:

here's a cool article i read about how smartphone photography works these days. there are far more computational techniques going on behind the scenes than i was aware of. obvious stuff like exposure bracketing and color correction, but also poo poo like focus bracketing, subpixel stacking, shutter coding, computational depth-mapping, ai convolution. it's nuts.

https://www.dpreview.com/articles/9828658229/computational-photography-part-i-what-is-computational-photography



in essence, good smartphone cameras (iphones, google pixels) don't necessarily have better optical hardware than other phones. they take the same relatively crappy input that other phones capture (physics is physics) and do extensive computational image synthesis to make the output look great. most of the improvements in phone camera quality in the last few years are just from having enough ram and processor power to pull this off.

i wonder what would happen if you combined these techniques with a full-frame sensor and a big lens?

barely scratching the surface here, but these guys did something kinda like that: they applied vaguely similar techniques to an original iphone camera's pictures to "compare" it to a new iphone

https://www.youtube.com/watch?v=_gXSLJfwfwk

echinopsis
Apr 13, 2004



Sagebrush posted:

i would love to have all these computational features on my Z6. just have a "green square plus" mode where i can swipe between portrait/night shot/macro/etc just like on a phone and let the thing do all its instagram AI magic. or even better, let me enable and disable specific processing techniques. maybe i do want focus stacking on this one, and ML driven local tone mapping, but skip the auto-exposure bracketing because i'm going for a certain mood. i wouldn't want this as my only shooting mode, of course, but having the option there can only be a benefit.


thanks sagebrush I enjoyed reading that.

I feel the same way but you've done a real good job of expressing it. you should write a book

MrQueasy
Nov 15, 2005

Quit shakin' me, kid!

gently caress "truth", I want to start from as close to what the camera saw in its own idiosyncratic way and choose each "fix" manually. I don't want some deep dream thing adding detail that was never there unless I told it to, or it's some un-fixable artifact generated by the mechanics of the camera itself.

Progressive JPEG
Feb 19, 2003



when it comes to what you get from a camera, are "raw" formats actually "here's the ccd charges/photon counts" or is it just "skipped the compression step"

MrQueasy
Nov 15, 2005

Quit shakin' me, kid!

Progressive JPEG posted:

when it comes to what you get from a camera, are "raw" formats actually "heres the pixel charges/photon counts" or is it just "skipped the compression step"

IIRC it's closer to the second but with more values per pixel than a standard bitmap, though every camera company and sensor combination has its own quirks.

echinopsis
Apr 13, 2004



it's the data the sensor generates. so the entirety of the dynamic range is retained and there are no transforms done. just an array of pixel data

echinopsis
Apr 13, 2004



MrQueasy posted:

gently caress "truth", I want to start from as close to what the camera saw in its own idiosyncratic way and choose each "fix" manually. I don't want some deep dream thing adding detail that was never there unless I told it to, or it's some un-fixable artifact generated by them mechanics of the camera itself.

what I want to generate is raw camera data and achieve everything camera end by taking a different picture

i want to play around with the individual computations afterward

Sagebrush
Feb 26, 2012

Well, actually...

Progressive JPEG posted:

when it comes to what you get from a camera, are "raw" formats actually "heres the ccd charges/photon counts" or is it just "skipped the compression step"

It's not quite the photon counts, but it's close. It is a record of exactly what brightness value was reported by every photosite on the sensor, to whatever bit depth the ADC allows (perhaps 16 or 18 bits). Because of the Bayer filter, each site is only recording red, green, or blue light alone. To get a full color image, you need to process the RAW file using a model of the specific sensor configuration and filter behavior.

[image: Bayer demosaicing stages — full scene, raw sensor mosaic, filter colors applied, interpolated full-color result]
Image 2 is what the RAW file contains. In image 3 the filter colors are applied, and image 4 shows one sort of interpolation to estimate the RGB value at each pixel. So even if you "don't do any processing", the conversion from a Bayer pattern image to a full color one involves some blending and blurring, and the "true color" produced depends entirely on the algorithm you use. The RAW has the greatest amount of information about the light that the camera saw, and hence the most flexibility.
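if anyone wants to see how dumb the simplest version of that interpolation can be, here's a toy numpy sketch (not any real converter's algorithm): each output channel is just the average of that channel's known photosites in a 3x3 window.

```python
import numpy as np

def box3(x):
    """Sum of each pixel's 3x3 neighborhood (zero-padded edges)."""
    p = np.pad(x, 1)
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_rggb(raw):
    """Deliberately dumb bilinear demosaic of an RGGB Bayer mosaic.

    Real converters are edge-aware and apply the maker's color
    matrix on top; the point here is just that "no processing"
    still means blending neighbors to get RGB at every pixel.
    """
    h, w = raw.shape
    r = np.zeros((h, w)); r[0::2, 0::2] = 1                   # red photosites
    g = np.zeros((h, w)); g[0::2, 1::2] = 1; g[1::2, 0::2] = 1  # green
    b = np.zeros((h, w)); b[1::2, 1::2] = 1                   # blue
    # normalized average: sum of known samples / count of known samples
    chans = [box3(raw * m) / box3(m) for m in (r, g, b)]
    return np.stack(chans, axis=-1)                           # (h, w, 3)
```

this is roughly the "some blending and blurring" step: even a flat scene only comes out right because every pixel borrows its missing two channels from its neighbors.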

Sagebrush fucked around with this message at 08:31 on Jan 5, 2022

fart simpson
Jul 2, 2005




echinopsis posted:

what I want to generate is raw camera data and achieve everything camera end by taking a different picture

i want to play around with individual computes afterward

that article professor sagebrush posted up there talks about plenoptic cameras that basically use a bunch of microlenses behind the main lens to capture a "light field" instead of an image. kind of like taking the same photo from a bunch of slightly different angles at once, optically. he says given a light field you can calculate any possible individual image within that field, which allows you potentially to change the angle, focus, aperture, etc to a limited extent. these would be real optical changes as a post processing step, not just faking it with blur filters and crap

Sagebrush
Feb 26, 2012

Well, actually...

Follow up post: there is a different type of sensor, trade named Foveon, that doesn't require a Bayer filter. In a Foveon sensor, three photosites are stacked on top of one another at each pixel location, and the system exploits the different penetration depths of different colors of light into silicon to separate the colors. This means every pixel captures true RGB data, so there is no blurring or interpolation required, and the resulting image is noticeably sharper.



I think only Sigma cameras ever used these, and they have pretty much died out today. I dunno why they never took off. I guess it's more challenging to make the stacked sensors, and Bayer algorithms are good enough.

fart simpson
Jul 2, 2005




maybe good enough for the blurry crap photos you take

josh04
Oct 19, 2008





yeah, the raw values come straight from the sensor and the post-processing you do on them mirrors the processing the camera would do internally: debayer, anti-alias, a matrix from the camera manufacturer that converts sensor values into linear light, sharpening. if you write your own raw decoder you can have this fun yourself.

lots of film and tv people were snooty about bayer pattern for a long time because in theory you're losing chroma resolution but it really does work extremely well (and also our eyes don't give a poo poo about chroma resolution). making debayering better in code turned out to be easier than maintaining accurate optical beam splitters for 3-ccd.

foveon sensor is neat but it's more complicated to make, more expensive and debatably higher quality - sticking another object in front of your green sensors isn't going to help low-light performance.

lightfields are very cool but permanently five years away

echinopsis
Apr 13, 2004



fart simpson posted:

that article professor sagebrush posted up there talks about plenoptic cameras that basically use a bunch of microlenses behind the main lens to capture a "light field" instead of an image. kind of like taking the same photo from a bunch of slightly different angles at once, optically. he says given a light field you can calculate any possible individual image within that field, which allows you potentially to change the angle, focus, aperture, etc to a limited extent. these would be real optical changes as a post processing step, not just faking it with blur filters and crap

I'm curious how effective this is when talking about the huge out of focus effects you get with really wide apertures. I'd love to play around with it to see what it could do.

I've heard of the idea of using multiple small cameras in the same way multiple radio telescopes are used to form space images that would otherwise require telescopes kilometres (i may be exaggerating) in width

ie generate the image from many images spatially distributed and construct something similar again

MrQueasy
Nov 15, 2005

Quit shakin' me, kid!

echinopsis posted:

I'm curious how effective this is when talking about the huge out of focus effects you get with really wide apertures. I'd love to play around with it to see what it could do.

I've heard of the idea of using multiple small cameras in the same way multiple radio telescopes are used to form space images that would otherwise require telescopes kilometres (i may be exaggerating) in width

ie generate the image from many images spatially distributed and construct something similar again

You mean "Focus Stacking"?

Gentle Autist
Jun 4, 2003



Sagebrush posted:

i would love to have all these computational features on my Z6. just have a "green square plus" mode where i can swipe between portrait/night shot/macro/etc just like on a phone and let the thing do all its instagram AI magic. or even better, let me enable and disable specific processing techniques. maybe i do want focus stacking on this one, and ML driven local tone mapping, but skip the auto-exposure bracketing because i'm going for a certain mood. i wouldn't want this as my only shooting mode, of course, but having the option there can only be a benefit.


loving pass the joint dude

echinopsis
Apr 13, 2004



MrQueasy posted:

You mean "Focus Stacking"?

idk? guess so, i don't know what that means

MrQueasy
Nov 15, 2005

Quit shakin' me, kid!

echinopsis posted:

idk? guess so, i don't know what that means

Taking a bunch of pictures of the same thing at different focus distances, and then using software to smush them together into one image with hypersharp focus across the whole depth of field.
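the smushing step fits in a few lines of numpy if you only want the core idea: per pixel, keep the value from whichever frame is locally sharpest. (sketch only; real tools add frame alignment and pyramid blending, and every name here is made up.)

```python
import numpy as np

def focus_stack(frames):
    """Merge a focus bracket: per pixel, pick the value from the
    frame that is locally sharpest there.

    Sharpness is measured as gradient magnitude, box-blurred a bit
    so the per-pixel choice is stable rather than noisy.
    """
    stack = np.stack(frames)                       # (n, h, w)
    n, h, w = stack.shape
    gy, gx = np.gradient(stack, axis=(1, 2))       # spatial gradients
    sharp = np.abs(gx) + np.abs(gy)
    # crude 3x3 box blur of the sharpness map
    p = np.pad(sharp, ((0, 0), (1, 1), (1, 1)), mode="edge")
    sharp = sum(p[:, i:i + h, j:j + w]
                for i in range(3) for j in range(3)) / 9.0
    best = np.argmax(sharp, axis=0)                # winning frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

an out-of-focus region has almost no local gradient, so its frame loses the argmax everywhere that another frame resolved actual detail.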

echinopsis
Apr 13, 2004



are they taken from the same location?

MrQueasy
Nov 15, 2005

Quit shakin' me, kid!

echinopsis posted:

are they taken from the same location?

Usually? there are some techniques for merging non-identical framing though... they always seem like more work than what I would get out of them.

Sagebrush
Feb 26, 2012

Well, actually...

echi is talking about aperture synthesis.

https://en.wikipedia.org/wiki/Aperture_synthesis

basically the concept there is: a telescope's resolution is limited by the size of its entrance pupil (aperture, main mirror, etc). wouldn't it be great if we could build a telescope that was, like, a mile in diameter, and get insane resolution? well we can't. but mathematicians figured out that if we put a whole pile of small sensors in a field a mile across, we could take the low-resolution signals from each one and computationally combine them into a much higher resolution image. the Very Large Array, with its 27 radio telescopes as seen in Contact, is an implementation of this in the radio spectrum.

just putting two sensors a mile apart doesn't give you a good image, though. two sensors form an interferometer, which is a useful tool by itself, but it won't produce a sharp spatial image. the more small sensors you can put into that mile-wide field, the better the effective resolution you can achieve. theoretically, if you had an infinite number of small sensors in that field, each one capturing a single tiny low-res view of the sky, you could synthesize a high-resolution image that is exactly equivalent to building one mile-wide mirror, even though none of the sensors acquired data that sharp. in reality, you can't build infinite sensors either, so you put in as many as you can and get a resolution that is partway between small sensor and mile-wide mirror.

this concept of a million sensors all capturing the same subject from a slightly different angle is (almost*) exactly the plenoptic camera that fart simpson mentioned. you put a grid of micro-lenses over the sensor and capture thousands of tiny (say 10-pixel diameter) fisheye views of the scene, each one subtly different, and then process those into one image. so echi, yes, this already exists in camera technology, though it isn't common by any means. there was a company called Lytro that went out of business a few years ago that made plenoptic cameras. they were a neat trick but i don't think anyone ever found a use for them.

on a macroscopic scale, optical aperture synthesis is much less practical. people are doing it for astronomy, but nobody's going to be walking around with a pizza box covered in 10,000 camera sensors to simulate having a lens 18 inches across any time soon. there just isn't really a need for that resolution. it would be a pretty good optics/photonics phd project though i bet.
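the refocus trick those plenoptic cameras did is surprisingly small once the views exist: shift each sub-aperture view by an amount proportional to its offset, then average. a toy numpy sketch (calibration and sub-pixel interpolation omitted, all names made up):

```python
import numpy as np

def refocus(views, offsets, alpha):
    """Shift-and-add refocus over sub-aperture views of a light field.

    Each view sees the scene from a slightly different position
    (du, dv); an object at a given depth appears displaced by
    disparity * (du, dv) between views. Shifting view k by
    alpha * (du, dv) and averaging brings the plane whose disparity
    cancels alpha into focus while everything else blurs.
    """
    acc = np.zeros_like(views[0], dtype=float)
    for view, (du, dv) in zip(views, offsets):
        acc += np.roll(np.roll(view, int(round(alpha * du)), axis=0),
                       int(round(alpha * dv)), axis=1)
    return acc / len(views)
```

with two views of a single point, picking the alpha that realigns the point gives a sharp dot; any other alpha smears it across both positions, which is exactly the "choose your focal plane afterward" effect.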

-------

focus stacking, on the other hand, is exactly what mrqueasy says it is: quickly take a bunch of images at different focus settings, then computationally stack the sharpest parts of each one to simulate a much larger depth of field than the lens allows. very useful in situations like macro photography. theoretically you could also arbitrarily change the point of focus, the depth of field, and the quality of the bokeh after the fact. kinda neat i guess?

Sagebrush fucked around with this message at 21:11 on Jan 5, 2022


echinopsis
Apr 13, 2004



ok that's interesting. the use case I wonder about is a lens on the back of a phone.

and I always wondered how that lytro camera actually captured the info. very interesting

thanks for sharing


this stuff is interesting purely for its own sake. in real life i'm more interested in taking better photos so that one day people think i'm ok 😩
