
Content

Mostly just talking photoscanning here, but hopefully useful for folks who haven't played around with it much! Super useful for making quick textural stuff.

I'm using Reality Capture (much more affordable as of a week ago- a typical scan runs something like 20-30 cents), but if you want a fully free, open-source option, Meshroom is also great.

One of my favorite parts of using photoscanned bits is that I get shapes I never would have modeled on my own. When I'm modeling in a void, it's easy to just make the easiest shape (a cube or a cylinder), but when engineers have to fit all those bits under a car engine, they make some pretty wacky shapes, and it's fun to be able to incorporate those into my own designs.

Here's the video talking about how to add dust. It's particularly useful for making a bunch of photoscans "gel".
https://www.patreon.com/posts/quickly-add-dust-42512972

And if you want to mess around with them (or use them in your own projects), here are the scans I'm using in the video.
RIGHT CLICK, SAVE LINK:
Blend File:
http://robotsoup.com/patreon/Photoscanned_Engine_Asset.blend
FBX (not as great of textures):
http://robotsoup.com/patreon/Photoscanned_Engine_Assets_FBX.zip



Files

Photoscanning Greebles from Real Life

Comments

Anonymous

Awesome, thanks!! I tried RealityCapture again after I heard it was now under Epic ... a model that would've previously cost $6 was now like 60c! So good

IanHubert

Yeah!!! I was thinking it was like 10% of what it was before. So much of my workflow previously was about trying to limit the amount of data that I fed into it, haha (although I immediately tried dumping a TON of data into it (like, 1000 4K frames), and it said it was going to take like 10 hours to process, so I guess there's a balance :P)

Anonymous

So there isn't any photoscan app alternative for Android devices yet? :( BTW, how you come up with these ideas for modeling is just mind-blowing!! Thanks for sharing this information! :D

IanHubert

Not that I've found, but I'm also not 100% sure. There used to be Display.Land (which was cool, but not really practical for any sort of production situation), but it's been discontinued, and I'm not sure if anything's moved in to take its place. IMO, the "We want this to give fast results on a phone!" tradeoff seems to make most things just slightly too blobby to be used practically (though the next generation of the tech might be amazing- wouldn't be too surprised if Android leapfrogged and came out with some super cool stuff at some point. It'd still probably require buying a new friggin' phone, though :/ ). And yeah!! I hope it's at all useful! :D

Kai Christensen

If you're lazy like me you can just use and then apply the Smooth modifier instead of actually painting with the smooth sculpt brush :) Also smooth shading w/ auto-normals sometimes makes my scans less chunky. Thanks for the amazing video Ian! Photoscanning will never *not* be a super satisfying process IMO. There's something inherently pleasing about watching your computer just crunch through a bunch of pictures and then magically make something cool out of them.

Anonymous

Ian you are awesome! I’m laid up in bed after my second vaccine and this is a Godsend! Thank you!

IanHubert

Oooo- I'll have to check out the smooth modifier! I'd assumed a global smooth would make everything TOO smooth- but maybe it'd be just enough to take the noise off- that'd be great! Thanks! :D And man yeah! As excited as I am for the next generation of technology to make the scans even CLEANER and all that, there's something about the process of it, and never knowing what you'll get, that makes it weirdly enjoyable, haha

Kai Christensen

Honestly it's frustrating because (in regards to HQ consumer LIDAR photoscanning) it *feels* like all the pieces are here, it's just nobody has put them together yet into a reasonable normal-person-friendly format. Like, part of me wants to start shopping for off-the-shelf lidar components now and research if it's even remotely feasible to build anything useful at a DIY level.

IanHubert

Woooo! Also- congrats on the vaccine! Hope you're not feeling too bad! Kaitlin just got her second one last week and only had a sore arm, but I've also had some friends get hit SUPER hard, so here's hoping yours is alright!

Anonymous

THERE IS A CLONE TOOL?! 🤯

Anonymous

And now THAT has made my day! Thanks Ian and Kai for the kind words!

Anonymous

I LOVE TAKING ROBOTS TO GRANDMA'S HOUSE TO GET HER SPECIAL HOMEMADE JAM. Thanks so much, Ian!

Anonymous

now if only reality capture worked on mac *sighhhhhhhhhhhhhhhhhhhhhhhhh

Anonymous

Any benefit to exporting the frames from blender and not just importing the video directly into RC?

Anonymous

DUDE! Reality Capture can do video now! It will take out the frames for you!

Jan van den Hemel

This is exactly what I need right now; I've been doing a lot with RealityCapture recently. I get the 3-month subscription on Steam- it's the cheapest method, I think (as long as you use it enough that you get your money's worth, I guess!). The main difficulty is finding cool stuff to scan.

Anonymous

One thing I've started doing recently is breaking up my scan process into two steps—I do the scan and get a mesh, then take it into Blender and smooth out the bumpies and fill holes and stuff, then send it back to the scan software to generate the texture. Avoids the worst of those texture zigzags you get when you start really pushing around the mesh, and it means stuff I'd otherwise have to clone (like a hole) gets textured with its real original self instead! Might be excessive, but I feel like it barely adds any time, since it's all work I'd be doing anyway, just with one extra brief OBJ export in the middle. Although, from what I've seen, I'm starting to think Metashape is more prone to holes and dents than other programs, so maybe it was out of necessity...

IanHubert

Oh hey whoa!!!! That's a cool process, though. I wonder if you can re-import models back into RealityCapture? Regardless, I wonder if baking the texture from the original OBJ onto the smoothed/fixed OBJ would give a similar result? I'll give it a shot! I've never really messed with baking, but it seems really useful for this stuff.

IanHubert

It comes in a subscription? That's good to know! And also oh man yeah- I keep walking around my town like, "I've already scanned everything!" I keep going on little fieldtrips to nearby towns to see what they have (Actually! Found out you can tour a battleship across the bay, so I zipped over there last week. LOT of good stuff I could snag there! I'm stoked :D)

Anonymous

Love this... always cheers me up when these pop into my inbox! Cheers Ian!

Anonymous

Great! Thanks for the video, and especially the iPhone 12 part. I bought the 11 when the 12 came out because my "scientist" ass tried to save a few hundred, but once I already had the phone I learned that the 12 has a LIDAR sensor on the back as well, and I've been feeling like I fucked up ever since. But as I see in this video, Reality Capture does a better job by using actual computer hardware to get much better results, so I guess the iPhone thingy is more a commercial trick/gimmick than an actual tool. And that made me feel good again! So thanks for that!

IanHubert

Ah yeah! And I'm sure there's a place for the tech (I plan on scanning lots of random environmental stuff while I'm out and about (could be GREAT for grabbing parking lot and road textures, which are otherwise weirdly hard to get)); from a scanning perspective, I just haven't seen anything that works nearly as accurately as the desktop-based solutions just yet. Although! Another factor I didn't mention is time; the iPhone scans took a couple minutes, whereas the entire RealityCapture workflow took the better part of an hour.

Anonymous

You can see why the YouTube Lazy Tutorials don't get too many updates- the effort and quality of these Patreon videos means he doesn't have the time!

Anonymous

I don't know if this would work with any of the software you use, but Intel has been producing standalone depth cameras, including a LIDAR product: https://www.intelrealsense.com/lidar-camera-l515/ I follow them because I've been interested in this for a while now.

Anonymous

I tried a new method of creating a wall with randomized bricks, and wanted to bool a window in there. The Bool Tool just messed everything up, so I gave up. Now I see that I maybe could've saved it by switching to the Fast mode in the Bool Tool! That's a relief for when I'm up to trying that again!

Anonymous

Quick question: I see shadows when you place it in HDRI worlds. How do you make that shadow-catcher thing?

Anonymous

Unfortunately I have a Radeon 5700 XT, and the photoscan apps all seem to require Nvidia for that sweet CUDA goodness :( I can get to a point cloud but not to a mesh. BUT I used the app Trnio on iPhone ($7) and got some good results- their scanning did not work that well, but importing my own pictures actually worked very well. Here's a mushroomy stump: https://sketchfab.com/3d-models/stump-with-mushrooms-01a129eaa5d140a5a7fb86022b7dce51

Anonymous

I'm on a Radeon and Agisoft Metashape works fine, might be worth checking out! A little pricier, I guess, but you own it forever, so as soon as you do your 80th or 90th scan it starts getting cheaper than the pay-as-you-go options

Anonymous

First you make a plane and place it under the object in question. Then you go to Object Properties - Visibility - Mask - Shadow Catcher. Just check the box and that's it!

Anonymous

Also, CG Matter made a tutorial on how to get the HDRI projected on the ground: https://www.youtube.com/watch?v=RsAUfQlZH_w

IanHubert

Ooo- here's hoping! Yeah my guess is the new "precise" solution works really well with simpler objects, but maybe not when you dump half a million polygons into it.

IanHubert

Oh interesting! Thanks for sharing that! Yeah if there were a solid standalone tool for it, that could be cool. Most everything I've seen so far has been pretty expensive because it's for high-end industrial applications.

Anonymous

You can also use the "Step" setting in the Properties -> Output -> Dimensions panel to render only every nth frame for the photoscan software.

Anonymous

You can run Meshroom on Google Colab for free using their CUDA setups: https://github.com/alicevision/meshroom/wiki/Meshroom-in-Google-Colab-(cloud)

Anonymous

@John Cliff - whoa, that's crazy. I've used Colab for some Deep Dream image-generating stuff already. I'll have to check that out.

IanHubert

AHHH!! That's a great idea! Someone was even saying that in RealityCapture at least, you can just load the video straight in, and pick how many frames you want to use right there.

Anonymous

Ian, can you please make a video on how to composite green screen onto background footage, and more motion tracking tutorials?

Anonymous

Amazing man, thanks for sharing! On a completely unrelated topic, I just watched Disney's Atlantis again tonight, and you sound exactly like the Milo Thatch character- it's crazy haha

Anonymous

Re: "the easiest way to do this is to put it in a video editor and speed it up"- you might be amazed how much faster it can be with the ffmpeg utility and a quick script. I know you are generally not super keen to spend time on anything that looks like programming, but it's a one-liner, and you can set it up so you can literally drag and drop your video file onto the script and it dumps all the images, at whatever frame step you want, as either PNGs or JPEGs into a directory nearby- no opening and clicking needed 💖

Anonymous

Great tutorial, thanks Ian! Question: how do you project your objects onto the floor of your HDRIs? In my case objects always seem to be floating on the HDRI.

Anonymous

Try the first example here, that extracts every 10th frame: https://superuser.com/a/1274696
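In case it helps, here's a minimal, self-contained sketch of that approach- the filenames, output directory, and frame step below are just placeholder choices (in practice you'd skip the first command and point ffmpeg at your own footage):

```shell
# Make a tiny 2-second, 30 fps test clip so this example runs standalone.
ffmpeg -y -f lavfi -i testsrc=duration=2:rate=30 -pix_fmt yuv420p input.mp4

# Dump every 10th frame as numbered PNGs ('n' is the 0-based frame index);
# -vsync vfr keeps ffmpeg from duplicating frames to fill the gaps.
mkdir -p frames
ffmpeg -y -i input.mp4 -vf "select='not(mod(n,10))'" -vsync vfr frames/frame_%04d.png
```

With the 60-frame test clip, the select filter keeps frames 0, 10, 20, 30, 40, and 50; change the 10 to whatever step you want, and swap `%04d.png` for `%04d.jpg` if you'd rather have JPEGs.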

Anonymous

He's got the plane underneath it set to Shadow Catcher, so the objects seem to be casting shadows on the HDRI background, but you can see they're still floating—the shadow just helps ground them from any particular angle. There are all sorts of fun ways to project your HDRIs to get a ground plane, though—check out this node setup from Luca Rood (https://twitter.com/LucaRood/status/1291803450502713344) or this tutorial from CGMatter (https://youtu.be/RsAUfQlZH_w)

Anonymous

You could also rebuild the geometry of the HDRI (put the camera in the center of the scene, add the Environment Texture to some geometry, slide things around until it lines up), and you get a fully textured, HDR environment to light your objects, complete with floors, walls, pillars, etc (well, anything visible in the original image—anything occluded won't be there obviously)—and you could move objects around within that space and get correct moving lighting and reflections. Here's a video of someone doing that: https://youtu.be/nnfD85cRxbE

Anonymous

Thanks for all the info Carter! I appreciate it!

Anonymous

This is great! Thanks for this! I love how simple photo scans can add so much more depth and hidden detail

Anonymous

Handy tool! https://blendermarket.com/products/hdri-maker

Anonymous

I love this workflow so much! Need to try it out soon, hopefully my laptop can handle the load with instancing :D

Anonymous

Just used Reality Capture to scan in something for a client: I found taking a bunch of photos worked better than video, and it only cost $0.27 for the scan. Just a note, though- you have to buy $10 of "points" at a time, so be forewarned.