
PCI Physics Cards

  • 24-06-2006 9:23pm
    #1
    Moderators, Society & Culture Moderators Posts: 3,735 Mod ✭✭✭✭


    Read about it in August's Custom PC. Basically it's a PCI card that takes some of the physics strain off your GPU. Overall it improves performance in some games at the moment, like Ghost Recon, with future titles following (UT2007 etc.).

    Worth it or just another gimmick?

    More info ---> http://www.ageia.com/



Comments

  • Closed Accounts Posts: 13,874 ✭✭✭✭PogMoThoin


    General thoughts on other forums are that they're not great: they don't take any pressure off the GPU and don't add much to the games. Wasted money. I'd like to hear from someone who has one.


  • Registered Users Posts: 3,141 ✭✭✭masteroftherealm


    Nope, useless FOR THE MOMENT, but will be almost mandatory eventually. Games aren't coded for them yet, so...


  • Closed Accounts Posts: 29,930 ✭✭✭✭TerrorFirmer


    It's not much good in current games but it should be utilised a lot better in to-be-released games... apparently in Ghost Recon it actually degraded performance on some machines, despite the whole point being otherwise...


  • Closed Accounts Posts: 14,983 ✭✭✭✭tuxy


    I never knew physics was handled by the GPU, I always figured it was the CPU? :confused:


  • Closed Accounts Posts: 13,874 ✭✭✭✭PogMoThoin


    Will they work in non-SLI mobos? When will games be built for them? GRAW is the only one so far.


  • Registered Users Posts: 852 ✭✭✭blackdog2


    A few games are enabled for it in the near future. It's getting more and more attractive, even though it's proven to decrease your FPS!!! :eek:


  • Registered Users Posts: 13,995 ✭✭✭✭Cuddlesworth


    Tbh, at the moment it's CPU ---> GPU. With the physics card it's CPU ---> PPU ---> GPU, and this creates a large lag in performance. Until I see some PCI-E cards coming out there's no chance I'm getting one.


  • Closed Accounts Posts: 12,401 ✭✭✭✭Anti


    krazy_8s wrote:
    Tbh, at the moment it's CPU ---> GPU. With the physics card it's CPU ---> PPU ---> GPU, and this creates a large lag in performance. Until I see some PCI-E cards coming out there's no chance I'm getting one.


    True, and a very valid point.


  • Registered Users Posts: 8,405 ✭✭✭gizmo


    Haven't we talked about this already, or is this sense of déjà vu I'm getting from a different forum? Anyway, the idea of having a dedicated physics co-processor does have potential, but to be honest I don't see it in its current form, mainly because it's yet another add-in card for a very specific area of gaming that many people will not be able to justify spending €250 on.

    With regards to its performance in current games: yes, people have been finding performance drops in its current implementation, but that's because when games are run in PhysX-enabled mode, more in-game content is also enabled, so the GPU/CPU is strained more. Outside Cell Factor we have yet to see a straight side-by-side example of a game being run with and without the card.

    Also, having them come out on PCI instead of PCI-E still baffles me. I *think* the idea was that it would fit into more mobos, but to be honest, if someone doesn't have a PCI-E mobo then the chances of them spending that kind of money on a dedicated physics card are fairly slim. Ageia have said there'll be PCI-E versions soon, so it might be best to wait and see what happens there...


  • Closed Accounts Posts: 9,538 ✭✭✭btkm8unsl0w5r4


    The fact is very few mobos have three PCI-E slots... so would you rather get a PPU, or another GPU for SLI or Crossfire? It's a no-brainer... the serious gamer will already have two GPUs on the PCI-E bus, and with heatsinks and fans these leave little room for another add-on card. I have SLI and an X-Fi and that leaves no free slots on an Asus A8N32-SLI Deluxe... so either ditch the sound card or live without a PPU. Personally I think the physics look great... the Cell Factor demo is really cool. However, I will hold out (and I'm an early adopter like a fool) until UT2007 comes out... or if Havok FX or DX10 comes out.


  • Registered Users Posts: 16,712 ✭✭✭✭astrofool


    Your A8N32-SLI Deluxe has a PCI-E slot above the GPU slots which no GPU or sound cards can block?

    Almost all motherboards now have a spare PCI-E slot, or 2 or 3.


  • Moderators, Society & Culture Moderators Posts: 9,689 Mod ✭✭✭✭stevenmu


    There's no reason to believe there'd be any noticeable difference between a PCI and a PCI-e version.

    Haven't we talked about this already, or is this sense of déjà vu I'm getting from a different forum?
    Yes, the whole internet seems to be awash with hysterical FUD over these cards; it's amazing the number of people who buy or don't buy them based on absolutely no understanding of what they actually do.

    They do not 'increase' or 'decrease' FPS. In Ghost Recon, when a card is present, much more debris gets rendered in explosions. More debris being rendered naturally leads to lower FPS; this is a game design choice, not a direct consequence of having a PPU installed. If there were no PPU present and the game used the same detail levels, the GPU would render the scene at the exact same speed it would with the PPU present, but the CPU would be responsible for calculating the trajectories of all the debris, which could lead to a lot of slowdown.

    PPUs were designed for one thing, and one thing only: physics calculations. Not increasing FPS, or making graphics better, just physics calculations. Currently these are done on the CPU (not the GPU), which isn't really suited to the task. The CPU has to decide where all objects in a scene should be placed, and part of deciding where an object should be placed is using physics calculations to determine its speed and trajectory and how they are affected by gravity, other forces, collisions with other objects etc. Once the CPU decides all this, it sends the info to the GPU to render the scene to your screen. It's fairly easy to see that if there are a lot of objects in a scene which need calculations done on them, the GPU could be left idle waiting for the CPU to tell it where to draw everything.

    The idea with PPUs is that by having a separate processor dedicated to these calculations, you're not only taking a huge load off the CPU, but because it can handle these types of calculations much more efficiently, you can have many, many more objects and calculations done on them in the same amount of time, leading to more realistic and interactive environments. Naturally, having more detail and objects to render does lead to lower FPS, because there's that much more for your GPU to render, but that's always the situation when you increase the amount of detail/objects.
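    A minimal Python sketch of the per-frame work described above (object count, timestep and field names are all invented for illustration): the CPU-side cost is just this integration repeated for every object, which is exactly what a PPU is meant to absorb.

```python
# Toy per-frame physics step: integrate each debris object's trajectory
# under gravity, with a crude ground bounce. Illustrative numbers only.

GRAVITY = -9.81    # m/s^2
DT = 1.0 / 60.0    # one frame at 60 fps

def physics_step(objects):
    """Advance every object by one frame -- the work a PPU would offload."""
    for obj in objects:
        obj["vy"] += GRAVITY * DT         # gravity updates velocity
        obj["x"] += obj["vx"] * DT        # velocity updates position
        obj["y"] += obj["vy"] * DT
        if obj["y"] < 0.0:                # collision with the ground plane
            obj["y"] = 0.0
            obj["vy"] = -obj["vy"] * 0.5  # lossy bounce
    return objects

# 10,000 debris pieces: the cost of this loop scales linearly with count,
# which is why "more debris" means more CPU time when there's no PPU.
debris = [{"x": 0.0, "y": 5.0, "vx": 1.0, "vy": 0.0} for _ in range(10_000)]
debris = physics_step(debris)
```

    Double the debris and this loop costs twice as much CPU time; the GPU only ever sees the resulting positions.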


  • Registered Users Posts: 13,995 ✭✭✭✭Cuddlesworth


    stevenmu wrote:
    There's no reason to believe there'd be any noticeable difference between a PCI and a PCI-e version.

    I may be wrong, but I think there will be a big difference between the two, due to the PCI-Express bus having a direct point-to-point connection to the CPU rather than sharing the same 32-bit parallel bus.

    More so on Amd systems with a hypertransport connection straight to pci-express.

    This would cause the slowdown/lag that games like Cell Factor and Ghost Recon have been experiencing to be reduced, due to the lower response times and latency in the way the PPU communicates with the CPU.


  • Closed Accounts Posts: 9,538 ✭✭✭btkm8unsl0w5r4


    astrofool wrote:
    Your A8N32-SLI Deluxe has a PCI-E slot above the GPU slots which no GPU or sound cards can block?

    Almost all motherboards now have a spare PCI-E slot, or 2 or 3.
    Eh no... the 7900GTX has a dual-slot cooler, so both PCI-E slots cover one PCI slot... the remaining PCI slot between the GPUs is filled with a sound card... so I'm stuck... unless I lose a GPU or the X-Fi.

    http://tweakers.net/ext/i.dsp/1135336455.jpg


  • Moderators, Society & Culture Moderators Posts: 9,689 Mod ✭✭✭✭stevenmu


    krazy_8s wrote:
    I may be wrong, but I think there will be a big difference between the two, due to the PCI-Express bus having a direct point-to-point connection to the CPU rather than sharing the same 32-bit parallel bus.

    More so on AMD systems with a HyperTransport connection straight to PCI-Express.

    This would cause the slowdown/lag that games like Cell Factor and Ghost Recon have been experiencing to be reduced, due to the lower response times and latency in the way the PPU communicates with the CPU.
    The GRAW developers are on the record as saying that the drop in frame rates is purely due to the extra detail being rendered, which makes perfect sense.

    I'm just guessing now, but when you think about it, compared to a video card there's relatively little data being sent back and forth. All that gets transferred is the mesh data (all the points in the object) and what instructions to carry out on them. There's no need to send any of the bandwidth-intensive stuff like textures, bump/light maps etc. The amount of data being sent back and forth is minuscule compared to what video cards need, and even they don't need PCI-E bandwidth (except maybe for communication between cards in SLI/Crossfire).
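    A quick back-of-envelope check of that claim, with invented but plausible numbers (the object count and per-object payload are assumptions):

```python
# Rough estimate: per-frame rigid-body state traffic vs. classic PCI bandwidth.
objects = 10_000          # simulated debris pieces (assumed)
bytes_per_object = 32     # position + orientation + velocity, roughly (assumed)
fps = 60

traffic_mb_s = objects * bytes_per_object * fps / 1e6   # MB/s each way
pci_bandwidth_mb_s = 133  # 32-bit / 33 MHz PCI, shared bus

print(f"{traffic_mb_s:.1f} MB/s of ~{pci_bandwidth_mb_s} MB/s available")
# 10,000 * 32 B * 60 fps = 19.2 MB/s -- well within even plain PCI
```

    So on raw bandwidth alone, plain PCI has plenty of headroom; latency is the more plausible concern.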


  • Registered Users Posts: 13,995 ✭✭✭✭Cuddlesworth


    I don't trust what the developers of GRAW say, because they're not an unbiased source. There was an expected drop in FPS, but never to the degree that's being shown on current systems (a 25% drop, I think). The extra rendering being done by the gfx is not enough to account for the loss in FPS in GRAW. I can't say this will be the case in any other games, because I haven't seen or heard of anything upcoming besides 2k7 that's worth talking about.

    Agreed they don't need the bandwidth, but they do benefit from the lower latency provided by the architecture of the PCI-E bus. In the opinion of some respected members of the overclocking community, the PPU would be best served with a direct link straight to the CPU, best done on the mainboard PCB somewhere close to the socket.

    Although the PPU is a good idea, it still puts more pressure on the CPU, which has already become the bottleneck in high-end rigs. Add to that the limitation of the GPU trying to render with the PPU running at full capacity and you would have an unplayable game.


  • Registered Users Posts: 8,405 ✭✭✭gizmo


    krazy_8s wrote:
    I don't trust what the developers of GRAW say, because they're not an unbiased source. There was an expected drop in FPS, but never to the degree that's being shown on current systems (a 25% drop, I think). The extra rendering being done by the gfx is not enough to account for the loss in FPS in GRAW.
    Yep, it is. Have you played through the game with a PPU in the system, or even seen the screenshots? There's a good bit more content on screen, to be honest. It's really a combination of this extra content, the PPU->CPU->GPU path, and the developers' inexperience working with the Ageia API. :)


  • Registered Users Posts: 13,995 ✭✭✭✭Cuddlesworth


    I'm sorry but I have to disagree with that. I've seen videos of GRAW with and without a PPU, and I can't see how a top-end graphics card can't deal with rendering some one-poly black smudges moving across the screen. It's simply not nearly enough pressure on the gfx to account for such a massive drop in FPS.

    And tbh, most of the "physics" effects in GRAW would have been better handled by a physics simulation on the GPU. The Ageia should only be used to make a game that will either run with it or not run without it.


  • Registered Users Posts: 8,405 ✭✭✭gizmo


    Well, that's where developer inexperience with the API comes in. It's not just a matter of turning the PPU "on"; there are significant amounts of code involved on top of the usual engine code to control this behaviour. That is, assuming it was integrated properly...

    By the way, I don't think it was a 25% drop in frame rates; I thought it was only about 10fps?


  • Registered Users Posts: 16,712 ✭✭✭✭astrofool


    Eh no... the 7900GTX has a dual-slot cooler, so both PCI-E slots cover one PCI slot... the remaining PCI slot between the GPUs is filled with a sound card... so I'm stuck... unless I lose a GPU or the X-Fi.

    http://tweakers.net/ext/i.dsp/1135336455.jpg

    You seem to be getting confused between PCI and PCI-E. You have one spare PCI-E slot in your current configuration; it's above the first PCI-E x16 slot. The argument here is that they should release the PPU in PCI-E format, which would be perfect for you, as you could use it, your PCI sound card and your SLI config all at the same time.


  • Closed Accounts Posts: 9,538 ✭✭✭btkm8unsl0w5r4


    I'm sorry, I'm not getting it... there are 6 slots on the board: PCI... PCI... PCI-E... PCI... PCI-E... and a modem riser. No matter which slot, PCI or PCI-E, the PPU goes in, with a dual-slot cooling system I don't see how I can have 2 x GPU, 1 x X-Fi and a PPU... please explain it to me... I hope I'm wrong. Thanks


  • Registered Users Posts: 15,815 ✭✭✭✭po0k


    krazy_8s wrote:
    Tbh, at the moment it's CPU ---> GPU. With the physics card it's CPU ---> PPU ---> GPU, and this creates a large lag in performance. Until I see some PCI-E cards coming out there's no chance I'm getting one.
    or HTX ;)


  • Registered Users Posts: 13,995 ✭✭✭✭Cuddlesworth


    SyxPak wrote:
    or HTX ;)

    You would have assumed that by now both the GPU and PPU would have an HTX connection, but I feel that Intel has held this technology back.


  • Registered Users Posts: 16,712 ✭✭✭✭astrofool


    I'm sorry, I'm not getting it... there are 6 slots on the board: PCI... PCI... PCI-E... PCI... PCI-E... and a modem riser. No matter which slot, PCI or PCI-E, the PPU goes in, with a dual-slot cooling system I don't see how I can have 2 x GPU, 1 x X-Fi and a PPU... please explain it to me... I hope I'm wrong. Thanks

    That "modem riser" is a PCI-E x4 slot, and could fit any potential PCI-E physics card (it can take PCI-E x1 cards as well). Check page 2-3 and 2-16 of your manual.


  • Registered Users Posts: 6,630 ✭✭✭gline


    astrofool wrote:
    That "modem riser" is a PCI-E x4 slot, and could fit any potential PCI-E physics card (it can take PCI-E x1 cards as well). Check page 2-3 and 2-16 of your manual.

    ;)

    As for the Ageia cards... personally, at the moment I think they are a waste of money.

    Wow, one or two games support it; that's no indication of whether this card will be developed for.

    Also, ATI's way of using an extra gfx card as a PPU is a bit mad... three graphics cards bunched tightly together :rolleyes: noisy and hot.

    If they brought out a PPU card that sped up ANY game by helping with the physics calculations alongside the CPU, then that would be amazing and well worth the money.

    But eventually we will probably all have to have a PPU card to play the latest games... but it is not time to get one yet ;)


  • Registered Users Posts: 8,405 ✭✭✭gizmo


    gline wrote:
    Also, ATI's way of using an extra gfx card as a PPU is a bit mad... three graphics cards bunched tightly together :rolleyes: noisy and hot.
    Try expensive? :D
    gline wrote:
    If they brought out a PPU card that sped up ANY game by helping with the physics calculations alongside the CPU, then that would be amazing and well worth the money.
    Ain't gonna happen. :)
    gline wrote:
    But eventually we will probably all have to have a PPU card to play the latest games... but it is not time to get one yet ;)
    True, however like I said, I don't think it's going to stay in its current add-in card form...


  • Closed Accounts Posts: 9,538 ✭✭✭btkm8unsl0w5r4


    astrofool wrote:
    That "modem riser" is a PCI-E x4 slot, and could fit any potential PCI-E physics card (it can take PCI-E x1 cards as well). Check page 2-3 and 2-16 of your manual.

    Excellent... thank you... and I see a PCI-E x4 version is coming soon... well, you learn something new every day.
    http://www.gdhardware.com/interviews/agiea/1.jpg


  • Moderators, Society & Culture Moderators Posts: 9,689 Mod ✭✭✭✭stevenmu


    krazy_8s wrote:
    I don't trust what the developers of GRAW say, because they're not an unbiased source. There was an expected drop in FPS, but never to the degree that's being shown on current systems (a 25% drop, I think). The extra rendering being done by the gfx is not enough to account for the loss in FPS in GRAW. I can't say this will be the case in any other games, because I haven't seen or heard of anything upcoming besides 2k7 that's worth talking about.
    Well, they're a pretty unbiased source IMO, but anyway, a 25% drop in GRAW isn't really surprising at all, seeing as it's pushing many current systems quite hard already; it doesn't take a lot extra to see some serious performance drops.
    krazy_8s wrote:
    Agreed they don't need the bandwidth, but they do benefit from the lower latency provided by the architecture of the PCI-E bus. In the opinion of some respected members of the overclocking community, the PPU would be best served with a direct link straight to the CPU, best done on the mainboard PCB somewhere close to the socket.

    Although the PPU is a good idea, it still puts more pressure on the CPU, which has already become the bottleneck in high-end rigs. Add to that the limitation of the GPU trying to render with the PPU running at full capacity and you would have an unplayable game.
    I suppose it's hard to say conclusively what difference latency will make between PCI and PCI-e based cards, I still don't expect it will be a huge amount, but I suppose for anyone considering buying one it could be worth waiting to see.

    edit: among the many other reasons to wait and see :)


  • Registered Users Posts: 1,226 ✭✭✭stereo_steve


    With dual-core processors, and even AMD's upcoming quad-core processors, would it not be better to use one core for physics? It would be cheaper, and mean less space taken up in the case, less heat, and less power drawn.

    Or am I missing something?


  • Registered Users Posts: 13,995 ✭✭✭✭Cuddlesworth


    stevenmu wrote:
    edit: among the many other reasons to wait and see :)

    That has to be the most sensible thing anybody has said so far. It's true, I have no idea how well the PPU will do as a PCI-E card in real life. In theory I have some idea.
    stereo_steve wrote:
    With dual-core processors, and even AMD's upcoming quad-core processors, would it not be better to use one core for physics? It would be cheaper, and mean less space taken up in the case, less heat, and less power drawn.

    Or am I missing something?

    You are missing something. A PPU is a processor designed to complete certain calculations based on physics algorithms. A GPU is a processor designed to complete certain calculations based on Direct3D algorithms. A CPU can do both of these things, but can't do either very well. So a multi-core rig can make a spare core do all the work, but that core can never compete with a chip solely designed to do one specific thing.
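    The specialisation argument can be put in toy numbers (both throughput figures below are invented for illustration): a general-purpose core retires roughly one physics operation per cycle, while a wide dedicated pipeline retires many in parallel.

```python
# Crude throughput model of a spare CPU core vs. a dedicated PPU.
# Both per-cycle figures are invented; only the ratio matters here.
objects = 32_000
cpu_ops_per_cycle = 1     # scalar general-purpose core (assumed)
ppu_ops_per_cycle = 32    # wide parallel physics pipelines (assumed)

cpu_cycles = objects / cpu_ops_per_cycle
ppu_cycles = objects / ppu_ops_per_cycle
print(f"specialisation speedup: {cpu_cycles / ppu_cycles:.0f}x")  # 32x
```

    Real figures differ, but this is why a spare core helps and yet never matches silicon built for one job.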

