
Nvidia RTX Discussion


Comments

  • Registered Users Posts: 13,986 ✭✭✭✭Cuddlesworth


    BloodBath wrote: »
    No way is the 3090 going to be anywhere near that cheap if it's the titan replacement which it seems to be.

    It's supposed to be an 850mm² die with 24GB of GDDR6. Expect at least 3 grand.

    To quote this forum, well worth the extra xxx euros in my book.


  • Registered Users Posts: 5,574 ✭✭✭EoinHef




  • Registered Users Posts: 13,986 ✭✭✭✭Cuddlesworth


    If I'm not mistaken, that means they're pulling the maths co-processor cores out into their own chip?


  • Registered Users Posts: 5,929 ✭✭✭Cordell


    No, it means they will split the GPU into 2 chips: one generic non-RT GPU and one tensor core / RT co-processor. Which makes a lot of sense if they can interconnect them with a fast bus.
    The generic math co-processor can't be separated from the CPU: it's not a distinct unit anymore, and connecting it via PCI Express isn't possible, it's way too slow. The GPU itself can be considered a math co-processor, but one with very specific functions and functionality.
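
    As a rough back-of-envelope on why PCI Express is too slow for that kind of coupling (every figure below is an assumed round number for illustration, not a measurement):

```cuda
// Sketch: how much bus bandwidth would a split rasteriser / RT co-processor need?
// Every figure here is an assumption picked for illustration, not a measurement.
#include <cstdio>

int main()
{
    const double gbuffer_bytes = 3840.0 * 2160.0 * 40.0; // assume ~40 bytes/pixel of shared scene data at 4K
    const double fps           = 60.0;                    // assume a 60 fps target
    const double traffic_gbs   = gbuffer_bytes * fps / 1e9;

    const double pcie3_x16 = 16.0;  // ~GB/s one-way, PCIe 3.0 x16
    const double pcie4_x16 = 32.0;  // ~GB/s one-way, PCIe 4.0 x16
    const double nvlink    = 100.0; // ~GB/s, NVLink / on-package interconnect class

    printf("Per-frame data: %.0f MB -> %.1f GB/s at %.0f fps\n",
           gbuffer_bytes / 1e6, traffic_gbs, fps);
    printf("That is %.0f%% of PCIe 3.0 x16, %.0f%% of PCIe 4.0 x16, %.0f%% of an NVLink-class bus\n",
           100.0 * traffic_gbs / pcie3_x16,
           100.0 * traffic_gbs / pcie4_x16,
           100.0 * traffic_gbs / nvlink);
    return 0;
}
```

    On those assumptions, one frame's worth of shared data already saturates a PCIe 3.0 x16 link before the co-processor has done any work, which is the bandwidth problem in a nutshell.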


  • Registered Users Posts: 13,986 ✭✭✭✭Cuddlesworth


    Cordell wrote: »
    No, it means they will split the GPU into 2 chips: one generic non-RT GPU and one tensor core / RT co-processor. Which makes a lot of sense if they can interconnect them with a fast bus.
    The generic math co-processor can't be separated from the CPU: it's not a distinct unit anymore, and connecting it via PCI Express isn't possible, it's way too slow. The GPU itself can be considered a math co-processor, but one with very specific functions and functionality.

    That's pretty much what I was saying. From what I understand, game engines mostly use integer calculations, while for general-purpose compute it depends on the requirements. In Turing, they reduced the floating-point performance of the CUDA core infrastructure, moving it to the Tensor cores as dedicated co-processors, and used gamers to justify the R&D to do so.

    I remember somebody in the tech community saying at the RTX launch that once the product is in the market and the chips are fabbed out, they will end up moving those cores into their own chip to reduce manufacturing costs for gaming cards.


  • Registered Users Posts: 5,929 ✭✭✭Cordell


    The rendering pipeline is mostly single-precision floating point. Tensor cores are to the CUDA cores what SSE/AVX is to a CPU: they're very fast at matrix operations, but they don't replace the CUDA cores, they complement them.
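
    Roughly what using them looks like from CUDA: a minimal sketch of one warp doing a 16x16x16 matrix multiply-accumulate on the Tensor Cores via the WMMA API (the kernel name and the tiny problem size are just for the example; assumes a Volta/Turing card and `nvcc -arch=sm_70` or newer):

```cuda
// Minimal Tensor Core sketch using CUDA's WMMA API.
// Assumes matrices are 16x16 half-precision tiles; kernel name is made up for the example.
#include <cuda_runtime.h>
#include <cuda_fp16.h>
#include <mma.h>

using namespace nvcuda;

// One warp computes one 16x16 tile of C = A * B on the Tensor Cores.
__global__ void wmma_gemm_tile(const half *A, const half *B, float *C,
                               int lda, int ldb, int ldc)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);      // start the accumulator at zero
    wmma::load_matrix_sync(a_frag, A, lda); // the whole warp cooperates on these loads
    wmma::load_matrix_sync(b_frag, B, ldb);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag); // one Tensor Core matrix multiply-accumulate
    wmma::store_matrix_sync(C, c_frag, ldc, wmma::mem_row_major);
}

int main()
{
    half *A, *B;
    float *C;
    cudaMallocManaged(&A, 16 * 16 * sizeof(half));
    cudaMallocManaged(&B, 16 * 16 * sizeof(half));
    cudaMallocManaged(&C, 16 * 16 * sizeof(float));
    for (int i = 0; i < 16 * 16; ++i) {
        A[i] = __float2half(1.0f);
        B[i] = __float2half(1.0f);
    }

    wmma_gemm_tile<<<1, 32>>>(A, B, C, 16, 16, 16); // one warp = one tile
    cudaDeviceSynchronize();
    // Every element of C should now be 16.0 (a row of ones dotted with a column of ones).
    return 0;
}
```

    The CUDA cores still issue the instructions and handle everything that isn't a matrix multiply, which is why the two complement rather than replace each other.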


  • Registered Users Posts: 18,706 ✭✭✭✭K.O.Kiki




  • Registered Users Posts: 10,299 ✭✭✭✭BloodBath


    30% rasterisation improvement is not that impressive but nice all the same.

    It's the other RT and Tensor improvements I'm interested in.


  • Registered Users Posts: 7,180 ✭✭✭Serephucus


    We're still dealing with late-stage ES (engineering sample) stuff at this point, so clocks won't be final. 30% is less than I would have expected for raster, but RT will be the main one. Is Control at 144 FPS too much to ask?


  • Registered Users Posts: 18,706 ✭✭✭✭K.O.Kiki




  • Registered Users Posts: 237 ✭✭Komsomolitz




  • Registered Users Posts: 10,299 ✭✭✭✭BloodBath


    4 ray traced features. I can't see the 20 series handling that too well unless you want to play at < 30 FPS.


  • Registered Users Posts: 18,706 ✭✭✭✭K.O.Kiki


    BloodBath wrote: »
    4 ray traced features. I can't see the 20 series handling that too well unless you want to play at < 30 FPS.

    The only upside to WFH is that I've been able to save up enough for a new GPU if I wanted one :pac:


  • Registered Users Posts: 5,574 ✭✭✭EoinHef


    Would that not be why they're touting DLSS?

    That should help the 20XX series. I'm sure it will be well tuned for Cyberpunk as well, given the size of the game launch and how much anticipation there is around it.


  • Registered Users Posts: 10,299 ✭✭✭✭BloodBath


    According to Laymen Gaming, who played the PC version for 5 hours, it ran around 60 FPS at 1080p on a 2080 Ti with all the bells and whistles turned on. It wasn't locked to 60 either and went up at times, especially when driving, they said. I assume some of the RT stuff is disabled while driving.

    1080p/60 with a 1k GPU is not exactly great.


  • Registered Users Posts: 5,574 ✭✭✭EoinHef


    BloodBath wrote: »
    According to Laymen Gaming, who played the PC version for 5 hours, it ran around 60 FPS at 1080p on a 2080 Ti with all the bells and whistles turned on. It wasn't locked to 60 either and went up at times, especially when driving, they said. I assume some of the RT stuff is disabled while driving.

    1080p/60 with a 1k GPU is not exactly great.

    No, that doesn't sound great. Who are Laymen Gaming though?

    Are they not game reviewers rather than hardware? Not sure I'd trust game journos to know what they're doing.


  • Registered Users Posts: 10,299 ✭✭✭✭BloodBath


    Sorry, I misquoted them. They said a minimum of 60 FPS, going up as high as 100 while driving, so not as bad, but still not great.

    I don't think the 2060/2070 class cards will fare too well. This game will be used to push the 3000 series cards.


  • Registered Users Posts: 5,929 ✭✭✭Cordell


    Probably this: https://www.youtube.com/watch?v=WarYN1tRS1o
    It's not clear whether that was 1080p with DLSS, meaning a lower internal resolution, or whether 1080p was the actual render resolution.
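
    For what it's worth, if that 1080p figure was the DLSS output rather than the render resolution, the internal resolution would be a fair bit lower. A quick sketch using the commonly quoted DLSS 2.0 per-axis scale factors (those ratios are the publicly reported presets, nothing confirmed for Cyberpunk specifically):

```cuda
// Rough internal render resolutions behind a 1920x1080 DLSS output target.
// The per-axis scale factors are the commonly quoted DLSS 2.0 presets (assumption).
#include <cstdio>

int main()
{
    const int out_w = 1920, out_h = 1080;
    const struct { const char *name; double scale; } presets[] = {
        {"Quality",     2.0 / 3.0}, // ~0.67 per axis
        {"Balanced",    0.58},
        {"Performance", 0.50},
    };

    for (const auto &p : presets)
        printf("%-12s -> roughly %dx%d internal\n",
               p.name, (int)(out_w * p.scale + 0.5), (int)(out_h * p.scale + 0.5));
    return 0;
}
```

    So "1080p with DLSS on" could mean anything from a ~1280x720 to a ~960x540 internal render, which changes how impressive (or not) that 60 FPS figure is.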


  • Registered Users Posts: 10,299 ✭✭✭✭BloodBath


    Around 6:45 into the video specifically.


  • Registered Users Posts: 18,706 ✭✭✭✭K.O.Kiki


    BloodBath wrote: »

    1080p/60 with a 1k GPU is not exactly great.

    On the contrary, I think this is superb - because if the 2080 Ti struggles that much, Nvidia must have made some great advancements over the last 2 years if they're ready to implement all this RT on their next-gen cards.

    Also don't forget that Cyberpunk is an open-world game, so it would be a fair bit more taxing than, say, CONTROL.


  • Registered Users Posts: 5,574 ✭✭✭EoinHef


    There could be a menu of options in-game allowing a choice of which RT effects to use. That would be a nice middle ground.

    Given the delays to the game, I'm not sure we can draw any solid performance conclusions either. Maybe indicative, but for all we know there could still be a load of optimisation to be done, as they've said the game itself is done and it's bug fixing, polish etc. that's causing the delay.


  • Registered Users Posts: 10,299 ✭✭✭✭BloodBath


    K.O.Kiki wrote: »
    On the contrary, I think this is superb - because if the 2080 Ti struggles that much, Nvidia must have made some great advancements over the last 2 years if they're ready to implement all this RT on their next-gen cards.

    Also don't forget that Cyberpunk is an open-world game, so it would be a fair bit more taxing than, say, CONTROL.

    I'm not ****ting on RT. It doesn't change the fact that the 2000 series cards were a beta test for it. We're looking at a 4x improvement on the 3000 series.

    Taking the load of some rendering tasks off the normal pipeline also frees up resources to actually improve performance in that area as well. They just need to get the balance right.


  • Registered Users Posts: 18,706 ✭✭✭✭K.O.Kiki


    We have no benchmarks to support a 4x improvement.


  • Registered Users Posts: 10,299 ✭✭✭✭BloodBath


    Not yet, but let's assume it's true. It's an area where they could easily achieve a 4x improvement, and an area they will dedicate more and more GPU die space to over time.

    The current 2000 lineup is badly bottlenecked by it. It would take a 4x improvement to bring it more in line without costing many frames.


  • Registered Users Posts: 655 ✭✭✭L


    Cordell wrote: »
    No, it means they will split the GPU into 2 chips: one generic non-RT GPU and one tensor core / RT co-processor. Which makes a lot of sense if they can interconnect them with a fast bus.


    So, I said this back at the RTX launch: is there any technical reason that RT should be implemented on the same card rather than as an add-on card?

    I get the cynical sales reasons, but surely two chips means they're not a million miles away if they can bridge the cards adequately.


  • Registered Users Posts: 462 ✭✭tazzzZ


    L wrote: »
    So, I said this back at the RTX launch: is there any technical reason that RT should be implemented on the same card rather than as an add-on card?

    I get the cynical sales reasons, but surely two chips means they're not a million miles away if they can bridge the cards adequately.


    I believe they can't get a connection quick enough, or with the desired latency, without having it on the same PCB. Again, just a rumour I heard, and maybe your method is perfectly doable.


  • Registered Users Posts: 5,929 ✭✭✭Cordell


    L wrote: »
    So, I said this back at the RTX launch: is there any technical reason that RT should be implemented on the same card rather than as an add-on card?

    I get the cynical sales reasons, but surely two chips means they're not a million miles away if they can bridge the cards adequately.

    If I were to speculate, it can probably be done, but with significant compromises. Even if they use some sort of NVLink to bridge the cards, it won't be the same as having the chip(let)s very close together. Also, the end user would need a system that supports this arrangement (think SLI-ready motherboards). So probably there is no market for such a solution.


  • Registered Users Posts: 7,180 ✭✭✭Serephucus


    I don't think NVIDIA would ever go this route, tbh.

    It's what they had when they bought AGEIA back in the day: to take advantage of PhysX, people needed those specific cards, so no one bothered buying the games. Once NVIDIA integrated PhysX into their GPUs, that was no longer an issue.

    tl;dr - It could be done, but it would ruin already struggling adoption rates.


  • Registered Users Posts: 655 ✭✭✭L


    Serephucus wrote: »
    I don't think NVIDIA would ever go this route, tbh.

    It's what they had when they bought AGEIA back in the day: to take advantage of PhysX, people needed those specific cards, so no one bothered buying the games. Once NVIDIA integrated PhysX into their GPUs, that was no longer an issue.

    tl;dr - It could be done, but it would ruin already struggling adoption rates.

    That's more or less what I figured - by adding them to the main GPU, they can sell "new cards" with "new features", whether or not they're actually desirable on their own merit, or whether or not the new card really has much to offer in raw performance.


  • Registered Users Posts: 13,986 ✭✭✭✭Cuddlesworth


    If they sold a GTX 2080 Ti alongside the RTX 2080 Ti for 200 quid cheaper, I'd guess the GTX card would have heavily outsold the RTX one.

