opengl - float vs double on graphics hardware


I've been trying to find information on the performance of using float versus double on graphics hardware. I've found plenty of information on float vs. double on CPUs, but such information is much scarcer for GPUs.

I code in OpenGL, so if there's any information specific to that API that you feel should be known, let's have at it.

I understand that if my program is moving a lot of data to/from the graphics hardware, then it's better to use floats, since doubles require twice the bandwidth. My inquiry is more about how the graphics hardware does its processing. As I understand it, modern Intel CPUs convert float/double to an 80-bit real for calculations (SSE instructions excluded), so both types are equally fast. Do modern graphics cards do any such thing? Is float and double performance roughly equal now? Are there strong reasons to use one over the other?
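To make the bandwidth point concrete, here is a minimal sketch (the mesh size and the GL loader header are assumptions, not from the question) of uploading vertex positions as floats; storing the same data as doubles would simply double the number of bytes pushed across the bus:

    #include <GL/glew.h>   // assumed GL loader; any loader works
    #include <vector>
    #include <cstddef>

    void uploadPositions()
    {
        // Hypothetical mesh: 3 components (x, y, z) per vertex, 10,000 vertices.
        const std::size_t vertexCount = 10000;
        std::vector<float> positions(vertexCount * 3);

        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);

        // sizeof(float) == 4, so this upload is ~120,000 bytes.
        // The same mesh stored as double would be ~240,000 bytes.
        glBufferData(GL_ARRAY_BUFFER,
                     positions.size() * sizeof(float),
                     positions.data(),
                     GL_STATIC_DRAW);

        // Attribute 0 reads three 32-bit floats per vertex.
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,
                              3 * sizeof(float), nullptr);
    }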

In terms of speed, GPUs are optimized for floats. I'm much more familiar with NVIDIA hardware, but in the current generation of hardware there is one DP FPU for every eight SP FPUs. In the next generation of hardware, they're expected to have more of a 1:2 ratio instead.
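Whether you can even use doubles in your shaders depends on the driver exposing the ARB_gpu_shader_fp64 extension. If you want to verify this on your own hardware, a rough run-time check looks like this (assuming a GL 3.x+ context is already current):

    #include <GL/glew.h>   // assumed GL loader
    #include <cstring>

    bool hasShaderFp64()
    {
        GLint extensionCount = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &extensionCount);

        for (GLint i = 0; i < extensionCount; ++i) {
            const char* ext = reinterpret_cast<const char*>(
                glGetStringi(GL_EXTENSIONS, static_cast<GLuint>(i)));
            if (ext && std::strcmp(ext, "GL_ARB_gpu_shader_fp64") == 0)
                return true;   // doubles are usable in GLSL on this driver
        }
        return false;
    }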

My recommendation is to see if your algorithm actually needs double precision. Many algorithms don't need the extra bits. Run some tests to determine the average error you get from going to single precision and figure out if it's significant (see the sketch below). If not, just use single.
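As a rough sketch of that kind of test (the compute() function is a made-up stand-in for your real algorithm, and the input range is arbitrary), run the same computation in float and in double and look at the average relative error:

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Stand-in for the real algorithm; replace with your own computation.
    template <typename T>
    T compute(T x)
    {
        return x * std::sqrt(x) + T(1) / (x + T(3));
    }

    int main()
    {
        std::vector<double> inputs;
        for (int i = 0; i < 100000; ++i)
            inputs.push_back(0.001 * i + 0.5);   // representative input range

        double totalRelativeError = 0.0;
        for (double x : inputs) {
            double ref    = compute<double>(x);                       // double-precision reference
            double single = compute<float>(static_cast<float>(x));    // same math in single precision
            totalRelativeError += std::fabs(single - ref) / std::fabs(ref);
        }

        std::printf("average relative error: %g\n",
                    totalRelativeError / inputs.size());
        return 0;
    }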

If your algorithm is purely for graphics, you probably don't need double precision. If you are doing general-purpose computation, consider using OpenCL or CUDA.

