"Časem také téměř jistě přijde i využití DDR4 čipů na grafických kartách, tak jako se to stalo s DDR3 (a jak bylo zvykem u DDR1, DDR1 či původních SDR DRAM)."
Toje co za hovadinu???
+1
0
-1
Je komentář přínosný?
"Časem také téměř jistě
a b https://diit.cz/profil/gpgpu
12. 9. 2013 - 11:11https://diit.cz/clanek/adata-odhalila-sve-ddr4-pameti/diskuse"Časem také téměř jistě přijde i využití DDR4 čipů na grafických kartách, tak jako se to stalo s DDR3 (a jak bylo zvykem u DDR1, DDR1 či původních SDR DRAM)."
Toje co za hovadinu???https://diit.cz/clanek/adata-odhalila-sve-ddr4-pameti/diskuse#comment-666990
+
John Nagger (https://diit.cz/profil/nagger), 12. 9. 2013 - 12:43:
The link to a DDR3 graphics card probably won't help him; like "Zaatharen" below, he has DDR and GDDR rather mixed up.
Karáš Svorka (https://diit.cz/autor/zaatharen), 12. 9. 2013 - 11:36:
If memory serves, DDR4 was first used by ATi on the Radeon HD 2600 XT and HD 2900. Some HD 3870s also have DDR4; I have two of them lying on my desk :)
del42sa (https://diit.cz/profil/del42sa), 12. 9. 2013 - 11:42:
But those were GDDR4, not DDR4; you're comparing apples and oranges :o)
BTJ (https://diit.cz/profil/btj), 13. 9. 2013 - 11:35:
From what I've read, they are more or less the same... the G means it is meant for graphics and usually runs at a higher voltage, so it puts out more heat.
del42sa (https://diit.cz/profil/del42sa), 13. 9. 2013 - 21:45:
They really are not:
The principal differences are:
• DDR3 runs at a higher voltage than GDDR5 (typically 1.25-1.65 V versus ~1 V).
• DDR3 uses a 64-bit memory controller per channel (so a 128-bit bus for dual channel, 256-bit for quad channel), whereas GDDR5 is paired with controllers of a nominal 32 bits (16 bits each for input and output). While a CPU's memory controller is 64-bit per channel, a GPU can use any number of 32-bit I/Os (at the cost of die size) depending on the application (2 for a 64-bit bus, 4 for 128-bit, 6 for 192-bit, 8 for 256-bit, 12 for 384-bit, etc.). The GDDR5 setup also allows for doubled or asymmetric memory configurations. Normally (using this generation of cards as an example) GDDR5 uses a 2 Gbit memory chip for each 32-bit I/O (i.e. for a 256-bit bus / 2 GB card: 8 x 32-bit I/Os, each connected to a 2 Gbit IC = 8 x 2 Gbit = 16 Gbit = 2 GB), but GDDR5 can also operate in what is known as clamshell mode, where each 32-bit I/O, instead of being connected to one IC, is split between two (one on each side of the PCB), allowing the memory capacity to be doubled. Mixing the arrangement of 32-bit memory controllers, memory IC density and memory circuit splitting allows for asymmetric configurations (192-bit with 2 GB of VRAM, for example).
• Physically, a GDDR5 controller/IC doubles the I/O of DDR3: with DDR, an I/O handles an input (write to memory) or an output (read from memory), but not both on the same cycle; GDDR handles input and output on the same cycle.
The memory is also fundamentally set up for the application it serves: system memory (DDR3) benefits from low latency (tight timings) at the expense of bandwidth, while for GDDR5 it is the opposite. GDDR5 timings would seem unbelievably slow compared with DDR3, yet VRAM is blazing fast compared with desktop RAM; this follows from the different workloads a CPU and a GPU undertake. Latency isn't much of an issue for GPUs, since their parallel nature lets them move on to other calculations when latency cycles stall the current workload/thread. The performance of a graphics card, for instance, is strongly affected (in percentage terms) by changing its internal bandwidth, yet changing the external bandwidth (the PCI-Express bus, say dropping from x16 to x8 or x4 lanes) has minimal effect. This is because a great deal of data (textures, for example) is continuously swapped in and out of VRAM; a GPU performs many computations in parallel, whereas a CPU computes in an essentially linear way.
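The capacity and bandwidth arithmetic described in the comment above can be sketched in a few lines of Python. This is only a rough illustration of the same rules of thumb (one memory IC per 32-bit I/O, doubling in clamshell mode, bandwidth = bus width x data rate); the function names and the example clock rates (dual-channel DDR3-1600, 6 GT/s effective GDDR5) are illustrative assumptions, not figures taken from the discussion.

def gddr5_capacity_gb(bus_width_bits, chip_density_gbit=2, clamshell=False):
    """Capacity in GB: one memory IC per 32-bit I/O, doubled in clamshell mode."""
    controllers = bus_width_bits // 32              # e.g. 256-bit bus -> 8 x 32-bit I/Os
    chips = controllers * (2 if clamshell else 1)   # clamshell splits each I/O across two ICs
    total_gbit = chips * chip_density_gbit          # 8 x 2 Gbit = 16 Gbit
    return total_gbit / 8                           # 16 Gbit = 2 GB

def peak_bandwidth_gbs(bus_width_bits, data_rate_mtps):
    """Peak theoretical bandwidth in GB/s: bus width (bits) x data rate (MT/s) / 8 bits per byte."""
    return bus_width_bits * data_rate_mtps / 8 / 1000

print(gddr5_capacity_gb(256))                  # 2.0   -> the 256-bit / 2 GB example above
print(gddr5_capacity_gb(256, clamshell=True))  # 4.0   -> same bus, capacity doubled
print(peak_bandwidth_gbs(128, 1600))           # 25.6  -> dual-channel DDR3-1600
print(peak_bandwidth_gbs(256, 6000))           # 192.0 -> 256-bit GDDR5 at 6 GT/s effective

The numbers also illustrate the bandwidth-versus-latency point: even with far looser timings, the wide bus and high effective data rate give the GDDR5 configuration several times the raw bandwidth of the system-memory setup.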
"Časem také téměř jistě přijde i využití DDR4 čipů na grafických kartách, tak jako se to stalo s DDR3 (a jak bylo zvykem u DDR1, DDR1 či původních SDR DRAM)."
Toje co za hovadinu???
http://eu.msi.com/product/vga/N630GT-MD2GD3.html#/?div=Specification
Odkaz na ddr3 grafiku mu as nepomůže, stejně jako níže "Zaatharen" má spíš chaos v DDR/GDDR
Pokud mě paměť neklame, tak poprvé DDR4 použila ATi na Radeonech HD2600 XT a HD2900. Dále DDR4ky mají některé HD3870ky, dvě mi leží na stole :)
to ale byly GDDR4 ne DDR4, pleteš hrušky s jablky :o)
Co jsem se docetl, tak jsou vicemene stejne .... G znamena, ze je pro grafiku a ma zpravidla vyssi napeti a tudiz vic hreje.
to opravdu nejsou:
The principle differences are:
•DDR3 runs at a higher voltage that GDDR5 (typically 1.25-1.65V versus ~1V)
•DDR3 uses a 64-bit memory controller per channel ( so, 128-bit bus for dual channel, 256-bit for quad channel), whereas GDDR5 is paired with controllers of a nominal 32-bit (16 bit each for input and output), but whereas the CPU's memory contoller is 64-bit per channel, a GPU can utilise any number of 32-bit I/O's (at the cost of die size) depending upon application ( 2 for 64-bit bus, 4 for 128-bit, 6 for 192-bit, 8 for 256-bit, 12 for 384-bit etc...). The GDDR5 setup also allows for doubling or asymetric memory configurations. Normally (using this generation of cards as example) GDDR5 memory uses 2Gbit memory chips for each 32-bit I/O (I.e for a 256-bit bus/2GB card: 8 x 32-bit I/O each connected by a circuit to a 2Gbit IC = 8 x 2Gbit = 16Gbit = 2GB), but GDDR5 can also operate in what is known as clamshell mode, where the 32-bit I/O instead of being connected to one IC is split between two (one on each side of the PCB) allowing for a doubling up of memory capacity. Mixing the arrangement of 32-bit memory controllers, memory IC density, and memory circuit splitting allows of asymetric configurations ( 192-bit, 2GB VRAM for example)
•Physically, a GDDR5 controller/IC doubles the I/O of DDR3 - With DDR, I/O handles an input (written to memory), or output (read from memory) but not both on the same cycle. GDDR handles input and output on the same cycle.
The memory is also fundamentally set up specifically for the application it uses:
System memory (DDR3) benefits from low latency (tight timings) at the expense of bandwidth, GDDR5's case is the opposite. Timings for GDDR5 would seems unbelieveably slow in relation to DDR3, but the speed of VRAM is blazing fast in comparison with desktop RAM- this has resulted from the relative workloads that a CPU and GPU undertake. Latency isn't much of an issue with GPU's since their parallel nature allows them to move to other calculation when latency cycles cause a stall in the current workload/thread. The performance of a graphics card for instance is greatly affected (as a percentage) by altering the internal bandwidth, yet altering the external bandwidth (the PCI-Express bus, say lowering from x16 to x8 or x4 lanes) has a minimal effect. This is because there is a great deal of I/O (textures for examples) that get swapped in and out of VRAM continuously- the nature of a GPU is many parallel computations, whereas a CPU computes in a basically linear way.
Pro psaní komentářů se, prosím, přihlaste nebo registrujte.