Photogrammetry workflow
In my experience, PhotoScan makes the most beautiful mesh, BUT only from tack-sharp, flawless DSLR pictures.
In our case (a point-and-shoot 3D scanner) some pictures won't be sharp enough, some will be a little blurry, some will have shadows, and in that case 123D Catch does a better job (and a good one) at generating a mesh.
However, PhotoScan always makes better textures; the hybrid workflow is there to address this problem.
Using both programs, it is possible to get 123D Catch's beautiful reconstruction from shitty pictures together with PhotoScan's flawless textures.
PhotoScan workflow
1. Generating the mesh
Generate your mesh with PhotoScan: align photos with a 40k keypoint limit, and as we learned, skipping the alignment optimization gives better results.
A medium-quality dense cloud with aggressive filtering is best with our picture quality.
Use the highest possible mesh polygon count.
Build a 4096 px texture (see the scripted sketch below).
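For reference, here is a minimal scripted version of the settings above. It assumes PhotoScan's Python API roughly as of the 1.2.x releases; enum and keyword names drift between versions, so check them against your own reference manual.

```python
# Minimal sketch of the PhotoScan settings above (API names assumed
# from the 1.2.x Python reference -- verify against your version).
import PhotoScan

chunk = PhotoScan.app.document.chunk

# 1. Photo alignment with a 40k keypoint limit; we deliberately skip
#    the "optimize alignment" step, which worked better for us.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.GenericPreselection,
                  keypoint_limit=40000)
chunk.alignCameras()

# 2. Medium-quality dense cloud with aggressive depth filtering.
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality,
                      filter=PhotoScan.AggressiveFiltering)

# 3. Mesh at the highest face count PhotoScan offers.
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 face_count=PhotoScan.HighFaceCount)

# 4. 4096 px texture.
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)
```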
2. Wrap
Retopologize with R3DS Wrap, using a template mesh with a good UV map,
then run a texture transfer.
3. Re-texture with PhotoScan
In PhotoScan, Tools > Import Mesh: the imported mesh will flawlessly replace PhotoScan's own,
because orientation and scale still match thanks to R3DS Wrap.
Then Process > Compute Texture, using the "keep uv" option so the texture is baked onto your template's UV map.
Voilà, export the mesh and enjoy.
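Scripted, the re-texturing step could look like the sketch below. The calls are again assumed from the 1.2.x Python API and the file paths are hypothetical; we simply skip buildUV so the UVs of the imported mesh are kept, which is what the "keep uv" option does in the GUI.

```python
# Sketch of the re-texture step (PhotoScan 1.2.x API assumed,
# hypothetical file paths).
import PhotoScan

chunk = PhotoScan.app.document.chunk

# The wrapped mesh replaces PhotoScan's mesh; no transform needed,
# since orientation and scale were preserved by R3DS Wrap.
chunk.importModel("wrapped_mesh.obj")

# No buildUV call here: reprojecting onto the mesh's existing UV map
# is the scripted equivalent of the GUI's "keep uv" option.
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

chunk.exportModel("final_mesh.obj")
```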
123D Catch workflow
1. Generating the mesh
Generate your mesh with the 123D Catch desktop application at the highest possible detail setting.
If the texture looks messed up, don't worry, it's only a display glitch; export as OBJ.
2. Wrap
Retopologize with R3DS Wrap, using a template mesh with a good UV map,
then run a texture transfer.
And done.
CON: the texture transfer is not as beautiful as PhotoScan's reprojection.
Hybrid workflow
This is for when PhotoScan fails to make a correct mesh. The idea is to generate a mesh with both PhotoScan and 123D Catch, then scale, rotate, and align the 123D Catch mesh to the PhotoScan one.
Wrap it up, import it into PhotoScan, and do the texture reprojection.
To fit the 123D Catch model onto the PhotoScan one, align the two meshes with a similarity transform (uniform scale + rotation + translation), either by eye in a 3D package or numerically (see the sketch after this section).
The rest is the same as the PhotoScan workflow.
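If you prefer to compute the fit, picking a handful of matching landmark points on both meshes is enough. Below is a minimal sketch assuming only numpy; the landmark coordinates are made-up placeholders, and the estimator is the standard Umeyama least-squares similarity transform (not something either program provides).

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares s, R, t with  s * R @ src[i] + t ~= dst[i]
    (Umeyama's method). src, dst: Nx3 arrays of matched points."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / len(src)                 # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, 1.0, d])                 # reflection guard
    R = U @ D @ Vt
    var_src = (sc ** 2).sum() / len(src)       # source point variance
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

# Placeholder landmarks: the same features clicked on each mesh,
# in the same order (replace with your own picked points).
catch_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
scan_pts = np.array([[0.1, 0.2, 0.0], [2.1, 0.2, 0.0],
                     [0.1, 2.2, 0.0], [0.1, 0.2, 2.0]])

s, R, t = similarity_transform(catch_pts, scan_pts)
# Map every 123D Catch vertex into PhotoScan's coordinate frame:
# verts_aligned = s * verts @ R.T + t   (verts is an Nx3 array)
print(s, R, t)
```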