3d - Algorithm to interpolate any view from individual views mapped on a sphere -


I'm trying to create a graphics engine that shows point cloud data (in first person, for now). The idea is to precalculate individual views from different points in space and map each one onto a sphere. Is it possible to interpolate that data to determine the view from any point in space?

I apologise for my English and my poor explanation; I can't figure out a better way to explain it. If you don't understand the question, I'll be happy to reformulate it if needed.

Edit:

I'll try to explain with an example.

Image 1: first viewpoint

Image 2: second viewpoint

In these images you can see two different views of a pumpkin (imagine we have a sphere map of the full 360° view in both cases). In the first case we have a far view of the pumpkin and can see its surroundings; now imagine there is a chest right behind the character (we'd have a detailed view of the chest if we looked behind us).

So, the first view gives us: the surroundings, a low-detail image of the pumpkin, and a detailed view of the chest without its surroundings.

In the second view we have the exact opposite: a detailed view of the pumpkin and a non-detailed, general view of the chest (still behind us).

The idea is to combine the data from both views to calculate every view between them. Moving towards the pumpkin would mean stretching the points of the first image and filling the gaps with the second one (forget the other elements for now, just the pumpkin). At the same time, we'd compress the image of the chest and fill in its surroundings with data from the general view in the second map.

What I need is the algorithm that dictates this stretching, compressing and combination of pixels (not just forwards and backwards, but diagonally too, and using more than 2 sphere maps). I know it's fairly complicated; I hope I've expressed myself well enough this time.
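The "stretching" and "compressing" described above is what image-based rendering calls forward warping. Here is a minimal sketch of the geometry, assuming each stored pixel also carries a depth value (an assumption on my part; the question only mentions colored points, but a point cloud naturally provides distances):

```python
import math

def reproject(pixel_dir, depth, stored_pos, new_pos):
    """Where does a point seen from `stored_pos` appear from `new_pos`?

    `pixel_dir` is the unit direction of the pixel in the stored sphere
    map, and `depth` is the distance to the point along that direction.
    Returns the unit direction and distance from the new viewpoint.
    Moving toward the point spreads neighboring pixels apart
    ("stretching"); moving away packs them together ("compressing").
    """
    # Reconstruct the point's world position from the stored view.
    world = tuple(p + depth * d for p, d in zip(stored_pos, pixel_dir))
    # Re-express it relative to the new viewpoint.
    v = tuple(w - n for w, n in zip(world, new_pos))
    dist = math.sqrt(sum(c * c for c in v))
    return tuple(c / dist for c in v), dist
```

Gaps that open up between stretched pixels would then be filled from whichever other sphere map sees that region at higher resolution.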

Edit:

(I'm using the word "view" a lot, and I think that's part of the problem, so here is a definition of what I mean by "view": "a matrix of colored points, where each point corresponds to a pixel on the screen. The screen displays part of the matrix at a time (the matrix covers a 360° sphere and we display a fraction of that sphere). A view is the matrix of all possible points you can see by rotating the camera without moving its position.")
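For what it's worth, a common way to store such a "matrix covering a 360° sphere" is an equirectangular image, where longitude maps to columns and latitude to rows. A hypothetical sketch of the lookup (the axis convention here is my own assumption):

```python
import math

def direction_to_pixel(direction, width, height):
    """Map a unit view direction (x, y, z) to (column, row) in an
    equirectangular 360-degree map. y is 'up'; looking along +z
    lands in the center of the image."""
    x, y, z = direction
    lon = math.atan2(x, z)                    # -pi .. pi
    lat = math.asin(max(-1.0, min(1.0, y)))   # -pi/2 .. pi/2
    col = (lon / (2.0 * math.pi) + 0.5) * (width - 1)
    row = (0.5 - lat / math.pi) * (height - 1)
    return int(round(col)), int(round(row))
```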

Okay, it seems people still don't understand the concept. The idea is to display the most detailed environments possible by "precooking" the maximum amount of data before displaying it in real time. I'll deal with the preprocessing and compression of the data later; I'm not asking about that now. The "precooked" model would store a 360° view at each point in space that can be displayed (if the character moves at, for example, 50 points per frame, we'd store a view for each of those 50 points; the key is that we'd precalculate lighting and shading and filter out points that won't be seen, so nothing has to be processed at runtime). Basically, we'd calculate every possible screenshot (in a totally static environment). Of course, that's ridiculous: even heavily compressed, it would still be too much data.

The alternative is to store only strategic views, less frequently. Most of the points are repeated from frame to frame if we store every possible view, and the change in position of the points on screen is mathematically regular. That's what I'm asking about: an algorithm to determine the position of each point in the view based on a few strategic viewpoints. How do I use and combine the data from strategic views at different positions to calculate the view at any place?
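For combining more than two strategic views, one simple starting point (an assumption on my part; many weighting schemes exist) is to warp every stored view to the new position and blend the candidates per pixel, weighting each stored view by its inverse distance to the new viewpoint, so the nearest (most detailed) sphere map dominates and the others fill its gaps:

```python
import math

def _dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def blend_weights(new_pos, stored_positions):
    """Normalized inverse-distance weights for combining several
    stored sphere maps when rendering from `new_pos`."""
    eps = 1e-9  # avoid division by zero when standing on a stored view
    inv = [1.0 / (_dist(new_pos, p) + eps) for p in stored_positions]
    total = sum(inv)
    return [w / total for w in inv]
```

Halfway between two stored viewpoints this gives each map equal weight; right on top of one, that map's pixels take over almost entirely.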

