I've thought about trying 3D scanning with a LIDAR module, but they all seem really expensive. Does anyone have a recommendation for a spinning LIDAR module that an Arduino-style device can interface with directly, rather than over USB, and that doesn't cost me an arm and a kidney?
The Slamtec RPLIDAR units stream points over UART. They are 2D, not 3D, though.
You won't be able to do much with the raw data on something with the compute power of an Arduino. SLAM takes a lot of compute and memory, and both scale quickly with resolution.
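To put a rough number on that: a quick back-of-the-envelope sketch, assuming a plain 2D occupancy grid (one common SLAM map representation) at one byte per cell. The 20 m extent and the resolutions are made-up illustrative values.

```python
# Memory for a square 2D occupancy grid covering a 20 m x 20 m area,
# at 1 byte per cell, for a few map resolutions.
def grid_bytes(extent_m: float, resolution_m: float) -> int:
    cells_per_side = int(extent_m / resolution_m)
    return cells_per_side * cells_per_side  # 1 byte per cell

for res in (0.10, 0.05, 0.01):
    kib = grid_bytes(20.0, res) / 1024
    print(f"{res * 100:.0f} cm cells: {kib:.0f} KiB")
# → 10 cm cells: 39 KiB
# → 5 cm cells: 156 KiB
# → 1 cm cells: 3906 KiB
```

Halving the cell size quadruples the memory, and a classic ATmega328 Arduino has all of 2 KiB of RAM, so even the map alone is out of reach before any scan matching happens.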
I've never actually tried them, but if you google "RPLIDAR", there seem to be some budget-friendly options out there.
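For anyone who does wire one up: a 2D spinning unit like the RPLIDAR ultimately gives you (angle, distance) pairs, and turning those into x/y points is plain polar-to-Cartesian conversion. A minimal sketch, assuming the packet format has already been parsed into angle in degrees and distance in millimetres (the on-wire format varies by model):

```python
import math

def scan_to_points(scan):
    """Convert (angle_deg, distance_mm) pairs into (x, y) in metres."""
    points = []
    for angle_deg, distance_mm in scan:
        if distance_mm == 0:  # many units report 0 for "no return"
            continue
        r = distance_mm / 1000.0
        theta = math.radians(angle_deg)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# A fake 4-beam scan: three returns plus one dropped reading.
demo = [(0.0, 1000), (90.0, 2000), (180.0, 1500), (270.0, 0)]
print(scan_to_points(demo))
```

Even an Arduino can do this part fine; it's the mapping on top that needs the bigger machine.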
A slight tangent, but "rollerblades" is a case of a proprietary eponym: Rollerblade is a brand of inline skates (often just called "skates" for short) that became so famous that people started using it for all inline skates, no matter the brand. Just like "Hoover" for vacuum cleaners :)
As another tangent, it's a great example of an activity that became very popular for a time and then almost completely faded away for no obvious reason. (If anything, paved rail trails--which are often a great place to inline skate--are much more common today than they were during skating's heyday.)
and xerox
and roomba
Thanks, I'll hoover up these examples for later use.
While it does feel like we are slowly approaching a weird mix of "Snow Crash" and "Fringe", I can't help but marvel at how eerily beautiful those scans are. And the worst part is that now I want to try something similar. Is this what normal people call social proof?
I once put an Ouster OS1 on a hat and walked around with it. Pic of me here: [1]
[1] https://x.com/ddetone/status/1141785696224477184?s=46
Very cool. When was this? If you were to repeat it, which LIDAR would you use? Is there anything on a generous hobby budget nowadays?
It was at CVPR 2019, a computer vision conference. I may be biased since I used to work at Ouster, but cost notwithstanding, I would definitely pick the OS1 again for its unparalleled number of points per second combined with low weight and decent accuracy.
On the cheaper side there's the Livox Mid-360.
You should post this on /r/Photogrammetry on Reddit: https://www.reddit.com/r/photogrammetry/
Very impressive! LiDAR and point clouds seem very promising, but denoising point clouds and cleaning up artifacts keeps the skill bar high and the process time-intensive.
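On the denoising point: a common first pass is statistical outlier removal, i.e. dropping points whose average distance to their k nearest neighbours is anomalously large. A brute-force NumPy sketch for illustration (real tools such as Open3D do this with a KD-tree, and the parameter values here are just illustrative defaults):

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """points: (N, 3) array. Keep points whose mean distance to their
    k nearest neighbours is within std_ratio standard deviations of
    the cloud-wide average of that statistic."""
    # Pairwise distances, brute force: fine for small clouds only.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    knn_mean = d[:, 1:k + 1].mean(axis=1)  # column 0 is self-distance
    keep = knn_mean <= knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]

# A tight cluster near the origin plus one far-away noise point.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0, 0.01, (100, 3)), [[5.0, 5.0, 5.0]]])
print(len(remove_outliers(cloud)))  # → 100, the lone outlier is dropped
```

It won't fix structured artifacts (multipath, reflective surfaces), but it knocks out the isolated speckle cheaply.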
Wouldn't this be cheaper with a stereo pair of cameras + software reconstruction instead?
Actually a single camera is all you need. I think it’s fair to say that the only thing stereo gets you is scale. But both cameras and lidar have their place in sensing systems, and getting more experience with either is useful.
If you’re interested in reconstruction from images, check out Meshroom and Nerfstudio:
https://alicevision.org/
https://docs.nerf.studio/
Scale is the one thing you don't get from sequential mono images compared with stereo, unless you have some fancy lens model that lets you derive scale from nonlinearities in the lens. Is that something we do now? I've always wanted to try monocular SLAM with a fisheye lens.
With two mono images you can figure out that one object is twice as big as another, but you can't tell the absolute size of any object (= you don't know the scale).
With a stereo pair you know the distance between the lenses (the baseline), which lets you recover the absolute size of objects (= you know the scale).
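To make the scale point concrete: with a rectified stereo pair, depth follows from the known baseline via Z = f·B/d, where d is the disparity in pixels. A toy sketch (the focal length and baseline values are made up):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (m) of a point seen with the given pixel disparity
    in a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 12 cm baseline.
f_px, baseline = 700.0, 0.12
print(depth_from_disparity(f_px, baseline, 40.0))  # → 2.1 (metres)
```

In the mono case B is unknown, so the same disparity only pins depth down up to that unknown factor, which is exactly the scale ambiguity described above.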
That would need WAY more compute.
It's also a lot less robust, depending on the baseline.
There are advantages too, though, such as getting RGB information registered to the depth map for free.
So cool! I wonder how the LiDAR and ARCore poses were cross-calibrated.
To avoid that problem, I would just use a LiDAR-equipped iPhone Pro, which ships with industrial-grade cross-calibration, and still have all the visualization fun.
Just install Polycam and walk around :)