
/g/ - Technology

File: 164 KB, 779x1038, IMG_20160809_144752.jpg
No.55993351

Did I make the right choice?

>> No.55993375

you posted a /v/ thread on /g/

>> No.55993403

Why would you buy a gaming video card? How are you supposed to run your neural net off a gaming video card?

Oh, I see, you're on the wrong forum

>> No.55993414

You are on the wrong website

>> No.55993427
File: 47 KB, 599x418, 1698f574753cde67a7ffff73ef0ea583.jpg


>> No.55993432

>g1 gaming 1060

There's literally fuck all difference between a bottom-end shit-brand 1060 like a Gainward single-fan one and an expensive Gigabyte one. You should have saved some money and bought a lesser-known brand. You only need an expensive brand if you're buying a high-end card, where you'd be looking to push it as far as you can and would want high-quality VRMs and all that.

>> No.55993439

Found the AMDrones :^(

>> No.55993444

It's a tech purchase, thus /g/ related

>> No.55993453

Yeah, I know I might have overpaid for it; all that RGB lighting and shit is not needed, but this was the only and cheapest 1060 available.

>> No.55993480

noun, a meeting or medium where ideas and views on a particular issue can be exchanged.

>> No.55993490

You can actually, you don't need a Tesla

>> No.55993499

>ideas and views on a particular issue can be exchanged
>ideas and views
>can be exchanged

>> No.55993501



S'alright I guess

>> No.55993514

You're an fb-tier meme moron, probs 16-19 years old

>> No.55993587


80% of /g/ is /v/ threads, get over it faggot

>> No.55993611

>look at that fucking home
>those multiple screens
>comes with a bag, not with a package

Anon, it's all good, but the question is, why didn't you "invest in the future" and buy the 1080?

yes, I'm a eurofag

>> No.55993626

>No async
>No DX12
>No Vulkan
>nvidia gimpworks
sorry to tell you but-

>> No.55993647

>a Huang agent silently eliminated the dissent

>> No.55993648
File: 2.68 MB, 2976x2976, 20160729_114705.jpg

You made a pretty good choice.

>> No.55993665

>DX12 in box
Nvidia and their lies, haven't they learned anything from their 3.5?

>> No.55993697

LMAO, people actually buy this meme card.

>> No.55993707

Last I checked it supports DX12, if not in the way you want it to support DX12.

Fact of the matter is that you can play games that require DX12 with it, and that is all that matters.

>> No.55993729

You did good.

>> No.55993743

>no async

1060 has async and does it great actually.

>no DX12

1060 does DX12 great and actually beats the 480 in DX12 benchmarks.

>no Vulkan

1060 has Vulkan and actually beats the 480 in Vulkan

>> No.55993757

Have you not been paying attention for the last 15 years?
It's the same story every fucking time.
ATI/AMD come up with cards that support the full feature set of the new DirectX.
Nvidia's half-assed implementation blocks developers from using its features.
And somehow people forget.
But it's been going on since DirectX 8

>> No.55993771

Depends how long you're keeping it for, because Nvidia cards age terribly. If you're keeping it until the midrange card next year then you're fine, but if you're keeping it for two years then you're fucked. AMD has repeatedly shown that they eventually overtake Nvidia cards in the same price bracket, and this is just in DX11. DX12 and Vulkan are here and are getting adopted at an astonishing rate. Needless to say, AMD gets more out of DX12/Vulkan than Nvidia does because their architecture is more or less designed for it.

>> No.55993783

I didn't even know this because I was a console babby, can you explain exactly how or link me to a place that does?

>> No.55993794
File: 480 KB, 757x384, vulkan.png

1060 beats the 480 in Vulkan

If you want a future proof card, you get a 1060

>> No.55993813

Nice meme

The Talos Principle was the first Vulkan game because they just quickly ported it to Vulkan. The other rendering pipelines are in place; they just plugged Vulkan into it.

>> No.55993815

This one is better imho:


>> No.55993830

Well I can't really do that, as I'd have to go through all card specifications since 2001.
The DirectX 9 era was the best example, I think.
Although this whole DirectX 12 thing is starting to look like a serious competitor.

>> No.55993862

el peruANO

>> No.55993867

I wonder how much gimpvidia/Micro$oft paid them to half-ass a Vulkan piece of crap.

>> No.55993890

None, they put out the Vulkan support after two days of it being out. It is a quick and dirty port and they probably just did it to experiment with the new API

>> No.55993899

And Doom hasn't been properly done for Nvidia yet. What's your point?

>> No.55993930

>it's all just a big conspiracy

Sorry but the 1060 is better at Vulkan. It's way more future proof than the 480.

>> No.55993939

>nvidia sponsored dx12 title
>1060 better than 480 at vulkan

>> No.55993949
File: 19 KB, 688x371, vulkan-doom-1080p.png

Nvidia doesn't have special features on Vulkan; everything they are capable of is on display in DOOM. However, I know you're not going to accept this, so I'm just going to let you be a retard after this.

>> No.55993956

And you base that on the only benchmark that performs worse on new APIs than OpenGL/DX11.
The level of denial here is pretty much through the roof.

>> No.55993967

>denial through the roof

Funny coming from a failmd 480 fanboy desu

>> No.55993971

Maybe if you saw my other comment
You'd know that I didn't think it was a conspiracy.

I honestly have a hard time telling if you Nvidia fanboys are actually delusional fanboys or just meming.

>> No.55993977

>using a buggy Vulkan implementation

>> No.55993985

that's some backed up argument right there

>> No.55993991


Do you even know how to speak or read Japanese? If not, fuck off. If so, still fuck off for being retarded.

>> No.55993993

>not in the way you want it to support DX12

Me and all the game developers around the world.

>> No.55993997
File: 117 KB, 645x868, Capture.png


>Biggest expected gains are with AMD cards – not only due to Vulkan, but also due to AMD specific GPU low level optimization. This was achieved via extensions that AMD made specifically available for Vulkan, in particular AMD Intrinsics. Other thing is that Async Compute is immediately available for AMD cards

>> No.55994017

The only benchmark?

There are literally only two Vulkan games benchmarked and the 1060 beats the 480 in them overall by 27.13%

Face it, the 1060 is way better in Vulkan.

>> No.55994028

Well you're a retard.

>> No.55994044

Yup. Doom Vulkan is specifically optimized for AMD, which is why it runs so well.

Just like Hitman runs well on AMD because it's literally sponsored.

When you take a neutral Vulkan game like Talos Principle, you see that Nvidia is actually way faster at Vulkan.

>> No.55994059
File: 96 KB, 790x336, doom_vs_nvidia.png

Correct, Nvidia has their own intrinsics, but only AMD's are enabled for now

>> No.55994065

>on AMD cards, select anti-aliasing modes TSAA or NO AA to make sure Asynchronous Compute is enabled
What the fuck?

>> No.55994071

Forum, lmfao, what a clown. Even if it is a noun, don't use it here faggot; this is a BOARD and that's it nig

>> No.55994091
File: 142 KB, 1171x661, upload_2016-7-29_11-40-24.png


>> No.55994118

If you don't care about the high DPC latency, sure

>> No.55994134

Dude, a rx480 absolutely rapes a 1060 in Doom Vulkan. What the fuck are you talking about?

>> No.55994140
File: 22 KB, 688x371, gtx-1060-bench-doom-1080.png

>not knowing how statistics work this hard.

Being "better" at Vulkan isn't measured in what card performs better overall, it's which card gains more performance from it.

That being said, AMD still wins overall in DOOM
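To make the distinction concrete, here is a minimal Python sketch of the difference between "gains more from Vulkan" and "is faster at Vulkan". The fps numbers are made up for illustration, not real benchmark results:

```python
# "Gain from Vulkan" compares each card against its own baseline API,
# while "faster at Vulkan" compares absolute Vulkan fps between cards.

def api_gain(baseline_fps: float, vulkan_fps: float) -> float:
    """Percent change a card gets from switching to the new API."""
    return (vulkan_fps - baseline_fps) / baseline_fps * 100.0

# Hypothetical: card A has a weak OpenGL driver, card B a strong one.
card_a = {"opengl": 60.0, "vulkan": 90.0}
card_b = {"opengl": 95.0, "vulkan": 100.0}

gain_a = api_gain(card_a["opengl"], card_a["vulkan"])  # +50%
gain_b = api_gain(card_b["opengl"], card_b["vulkan"])  # ~+5%

# Card A "gains more from Vulkan" even though card B is still faster overall.
assert gain_a > gain_b
assert card_b["vulkan"] > card_a["vulkan"]
```

Both statements can be true at once, which is why the two sides of this argument keep talking past each other.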

>> No.55994162
File: 41 KB, 712x503, gtx-1060-bench-talos-1080.png

>Talos Principle is a good benchmark

>> No.55994171


When you combine the numbers of the two Vulkan games available, the 1060 comes out on top at 27.1% faster than the 480.

Objectively the 1060 is faster than the 480 at Vulkan.

>> No.55994180

AMD only gains more because they are shit in OpenGL. Look at how pathetic they are in OpenGL vs the 1060; that's roughly GTX 960 performance

>> No.55994190

I give up man. You win.
Let's have this talk again in 1 year.

>> No.55994200
File: 837 KB, 1096x629, 1468524625880.png


>> No.55994204


The circlejerk in this place looks pretty healthy. Also, I wonder how many of these (You)s are the same person.

>> No.55994227

Refer to

>> No.55994240

You're not refuting my point? Did you give up on the original point and now have to argue semantics? I'm honestly shocked an Nvidia fanboy would stoop this "low"

>> No.55994247

Enjoy your shit tier build quality and wind tunnel grade acoustics.

>> No.55994255

You can't deny that AMD is shit in DX11 and OpenGL. The hardware is there, actually looking much more capable on paper vs Nvidia counterparts, but falls short in actual performance. They had to call in their ACEs (which can only be enabled in DX12), which have been sitting idle for so long, to NORMALIZE their performance and be actually competitive.

Nvidia, on the other hand, can do this "async" even in DX11, so Nvidia has been doing it longer than AMD has

>> No.55994278

>expected gains

Oh amd, you silly

>> No.55994291

Yup. When you put the numbers together, it looks really bad for AMD in OpenGL and DX11.

1060 is 15.25% faster than 480 in DX11
1060 is 14.56% faster than the 480 in OpenGL

>> No.55994376

It's also a pricier card.
You'd expect it to perform better.

>> No.55994409

It's only $10 more

>> No.55994415

>waaah, why isn't my argument that performance should be measured by a single, developer-admitted AMD-only optimized game working? IT MUST BE EVERYBODY ELSE WHO'S WRONG!

>> No.55994518

Not a bad choice
Nvidia has NO hardware async and it's disabled by default in Nvidia drivers.
But their preemption algorithm got better, so Vulkan and DX12 work better compared to the 900 series

I'm a 1080 owner btw

>> No.55994527
File: 10 KB, 184x184, buckle up.jpg

>nvidia on the other hand, can do this "async" even in DX11 so nvidia has been doing it longer than amd does
Christ, could you possibly be more tech-illiterate? There's literally no such thing as asynchronous compute in DX11. It's not possible due to the limitations of the API. Nvidia's performance lead there has fuck all to do with asynchronous compute. Stop spouting terms you have no understanding of.

As for the idea that Nvidia are in any way ahead of the curve hardware-wise, their last-gen cards can't even do asynchronous compute in DX12 ffs.

>The reason Maxwell doesn't take a hit is because NVIDIA has explicitly disabled async compute in Maxwell drivers. So no matter how much we pile things to the queues, they cannot be set to run asynchronously because the driver says "no, I can't do that". Basically the NV driver tells Time Spy to go "async off" for the run on that card. The NV driver runs asynchronous tasks in one queue on Maxwell, similar to if they were submitted in one queue ("async off" in Time Spy). If NVIDIA enables Async Compute in the drivers on Maxwell, Time Spy will start using it. Performance gain or loss depends on the hardware & drivers.

Even Pascal is propped up by a software solution. Meanwhile AMD cards from 2011 can do it natively at a hardware level.

>> No.55994559

Well, where I live, it actually starts at +50€ vs an RX 480.

>> No.55994573

>Nvidia has NO Hardware Async

Uhh yes it does.


>> No.55994579

1080 owner here.
Pascal uses software async (preemption).
That's why we see small gains of 10% or less in Vulkan and DX12.
It doesn't matter though;
the high-end GPUs are retarded fast in DX11

>> No.55994594


"Don’t rely on the driver to parallelize any Direct3D12 works in driver threads

On DX11 the driver does farm off asynchronous tasks to driver worker threads where possible – this doesn’t happen anymore under DX12
While the total cost of work submission in DX12 has been reduced, the amount of work measured on the application’s thread may be larger due to the loss of driver threading. The more efficiently one can use parallel hardware cores of the CPU to submit work in parallel, the more benefit in terms of draw call submission performance can be expected."
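The quoted guidance can be sketched as follows. This is a rough model of the submission pattern it describes, not a real graphics API: the names (`CommandList`, `record_draws`) are invented for illustration. The application records independent command lists on its own worker threads, then submits them all at once:

```python
# Under a DX12-style API, the app (not the driver) parallelizes recording:
# each worker thread fills its own command list, then everything is
# submitted together (analogous in spirit to ExecuteCommandLists).
from concurrent.futures import ThreadPoolExecutor

class CommandList:
    def __init__(self):
        self.commands = []

    def draw(self, mesh_id: int):
        self.commands.append(("draw", mesh_id))

def record_draws(mesh_ids):
    """Each worker records into its own list, so no locking is needed."""
    cl = CommandList()
    for m in mesh_ids:
        cl.draw(m)
    return cl

meshes = list(range(100))
chunks = [meshes[i::4] for i in range(4)]  # split work across 4 threads

with ThreadPoolExecutor(max_workers=4) as pool:
    lists = list(pool.map(record_draws, chunks))

# One submission of all recorded lists.
submitted = [cmd for cl in lists for cmd in cl.commands]
assert len(submitted) == 100
```

The point of the quote is exactly this shift: DX11 drivers did the threading for you, while DX12 moves that responsibility onto the application.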

>> No.55994600

>Posts review
>No proof.
Nvidia uses preemption, NOT hardware async.
It's NOT capable of hardware-level async.

>> No.55994601


You have no idea what you're talking about.

Read this: http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/9

>> No.55994606
File: 718 KB, 730x730, 1470242372496.png

No, you didn't wait for vega.

>> No.55994607

in burgerland perhaps; in other regions, a custom 1060 is cheaper than a reference 480

>> No.55994611

It's literally right there. The 10 series has full async support right in the hardware.

>> No.55994614

>Link offers no proof
Dude, just stop.

Pascal Preemption isn't Async.

>> No.55994621

Uhhh it's right there, their entire architecture is built for async.

>> No.55994623

I repeat: NOT on the hardware level.
Again, preemption is NOT hardware async

>> No.55994626

Did you even read AND analyze the article?
You are an idiot, thinking the term async is universal when multi-engine is the correct term. Async was popularized by AMD, and Nvidia had their own methods that achieve the same goal.

>> No.55994629

Preemption isn't asynchronous compute, you tech-illiterate retards.

Back to /v/, kids.

>> No.55994633

Yes it does. Read the fucking article, 10 series has full hardware async support.

>> No.55994646

No, because Pascal is just a modified 900 series.
It is physically incompatible with hardware async.

Nvidia ditched the hardware scheduler after the 580

>> No.55994657

No it's not. You really should read the article: http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/9

10 series has full hardware async support.

>> No.55994662

They also said the 980 Ti was DX12 compatible.
Oh yeah, it wasn't.

A tech review isn't proof.

No Async
Only preemption

>> No.55994687
File: 48 KB, 560x577, smug fox.jpg

>nvidia had their own methods that achieve the same goal
Yeah, except it doesn't. The performance uplift from preemption performed in software is nowhere near that seen by a hardware solution.

No it fucking doesn't. The article literally explains to you how Nvidia are still handling scheduling in software, yet you're so tech-illiterate that you don't even understand the words that you're reading.

Imagine actually posting the proof that you have no idea what you're talking about so that other people can laugh at you.

>> No.55994714

Uhhh seriously you might want to do some research.

There are several reviews that show the 10 series has full hardware async support.

Here's another: http://www.pcper.com/reviews/Graphics-Cards/GeForce-GTX-1080-8GB-Founders-Edition-Review-GP104-Brings-Pascal-Gamers/Async

>> No.55994724

You really are not very smart, are you? The fundamental hardware differences in the 10 series are designed for async. That's why it's so good at VR.

10 series has full hardware async support.

>> No.55994734

enjoy the better built and more efficient card anon. and watch pajeets cry itt

>> No.55994737

>The performance uplift from preemption performed in software is nowhere near that seen by a hardware solution.
It may not be as fast, BUT IT STILL ACHIEVES THE SAME FUCKING RESULT. Proof? Look at all them Pascals having performance gains in async.
>but muh time spy isn't true async
HAHAHAHAHAHA what an idiot

>> No.55994738

Funny. Talos is just a Vulkan wrapper for OpenGL. Even the devs admitted it is a pile of shit and they just did it to be first out the door saying they had Vulkan. Just tagging it over OpenGL does not an API make.

>> No.55994840
File: 29 KB, 556x303, 466ec443f98cae788d4fc033f39544050f0650d997ac0274ce301a17e45ec37b.jpg

Nvidia posters are only here to do one thing and one thing only. Shill. Don't even try to reason with them it's a losing battle.

>> No.55994847

>the truth is shilling


>> No.55995170

>what is dx11_3

>> No.55995205

Explain further.

>> No.55995238


>amd has better performance in a game
"Amd showing the world how it's done !! hahaha nvidiots"

>amd gets blown the fuck out by a cheaper nvidia card
"N-nvidia are gimping a-amd!"
"It's all a conspiracy by M$ and nvidia!!!"
"Amd never lose reeeeeee"

Can we just purge AMD fanboys for good? As I've seen literally everywhere, it's impossible to have a decent discussion with them without them pulling out the conspiracy card.

>> No.55995248
File: 202 KB, 1924x1083, JF7ngP5.png

Just gonna leave this here

>> No.55995307

>nvidia doesnt have true async!
>their graphics cards are getting propped up by the cpu!

>> No.55995356
File: 239 KB, 581x597, Capture.png

Also, AMD literally needs a top of the line i7 to stay competitive.

>> No.55995361


No, it's an autistic card made by an autistic company, supported by autistic customers... I'm sorry you fell for it.

>> No.55995386

Poolaris is AMD's worst card since the 2900. Miners save AMD this time.

It won't beat the current 1080 sadly, and Nvidia still has the 1080 Ti if anything

>> No.55995427

>pascal doesnt support hardware async!
Do people here even read?

>> No.55995442

Not this meme again... The new CPU is a high-end Skylake chip and the next chip on the list is a mid-range part from 7 years ago. Someone buying an RX 480 is obviously going to have a CPU much faster than the old chips on this chart, making it completely meaningless.

But hey, n/v/idiots gonna n/v/idiot...

>> No.55995460

Lol, the miners will kill AMD after their cards keep dying and they get replacements from AMD. They're single-handedly going to fuck up AMD in both pricing and stock. Doing god's work.

>> No.55995473

see >>55995356

>> No.55995474

>Preemption in hardware
>The same as proper async compute
By that logic desktop CPUs have had async compute since the 1980's and mainframe ones since the 1960's...

>> No.55995478
File: 566 KB, 1920x1080, Screenshot_2016-08-09-11-14-59.png

In CPU-bottlenecked situations the 1060 is affected wayyyy less, even with a current FX CPU. When the bottleneck is removed it's another story though.

>> No.55995481

Just check the 2900 benchmarks. It's far better than the current Polaris lineup; the 460 can't even beat the 750, right? Huge failure.

>> No.55995493

>Using a DX11 game nobody plays except as a benchmark
>Somehow relevant in relation to a Vulkan game benchmark
As I said, n/v/idiots gonna n/v/idiot...

>> No.55995520

What it shows is bad CPU reliance from the graphics card/driver.
The games themselves aren't CPU bound, the graphics driver is. And that's terrible.

>> No.55995531

>cant read

>> No.55995534

Did you not read what I posted? They introduced a bottleneck by using a mid range part from 7 years ago. In terms of age we're talking about a difference roughly the same as between a 286 (introduced in 1982) and a 486 (introduced in 1989).

>> No.55995541

Is it true that there will never be a non-biased scientific article made solely to determine which card is better? With scientific and undeniable proof.

Even if there were one, wouldn't everyone just say the researcher was bought and is biased?

>> No.55995545

>AMD CPUs bottleneck AMD graphics cards
You can't make this shit up.

>> No.55995576

Yes, a CPU reliance that requires hampering the system with a 7-year-old CPU to properly show. Nvidia is also CPU-reliant to some extent; it's just going to take an even older CPU to bring it out.

>Simulate async compute using preemption (something CPUs have done since the 1960s)
>Somehow the same as proper hardware async
>Accusing me of not reading
Oh well... Fanboys gonna be fanboys...

>> No.55995592

And did you not read what I posted? I gave a graph of a popular AMD CPU and showed how the RX 480 is still hit harder by a CPU bottleneck than the 1060. Both my graph and the other graph with the older CPU show the exact same thing. Are you going to claim now that nobody owns an FX 8350, like you said nobody owns a 7-year-old CPU? You are honestly the most retarded person in this thread. Congrats.

>> No.55995625

Why can't we have snacks back and ban every fucking retard on this board

>> No.55995643

The FX-8350 (a 4-year-old chip at this point) is known for bad per-thread performance (probably not that far off from the 750) and games in general tend to rely pretty heavily on one or two threads.

While there are more people still playing on an FX-8350 out there, there are a lot more people with Intel chips that don't suffer from bad per-thread performance. You didn't present a general trend, you just presented a specific scenario where the RX 480 is hamstrung by the CPU.

>> No.55995645

You're a dumbass. 10 series has hardware async and preemption. They are separate things.

>> No.55995647

Preemption was something the Maxwell 2 GPUs had to do because they couldn't dynamically reallocate shaders from graphics to compute, or vice versa.

Pascal can.

>> No.55995652
File: 200 KB, 500x288, 1465584330938.png

Nvidia does not have hardware async
As much as you guys can say Nvidia can do async, it can't do it on a hardware level. Anything done on a hardware level will beat out anything done on a software level. This is also the reason AMD GPUs require more power: AMD believes in hardware solutions over software. So when people say AMD GPUs use so much power, I know it's because the GPU is actually doing the work and not throwing it at my CPU like Nvidia does with Warhammer. NVIDIA HAS NO HARDWARE ASYNC SUPPORT, GET OVER IT!

>> No.55995670

Full breakdown of the 1060 vs 480 aggregating multiple benchmarks:


On average, a 1060 is 13.72% better than a 480.

On average, when using DX11, a 1060 is 15.25% better than a 480.

On average, when using DX12, a 1060 is 1.02% worse than a 480.

On average, when using Vulkan, a 1060 is 27.13% better than a 480.
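For illustration, one defensible way to fold per-game results into a single "X% faster" figure is a geometric mean of per-game fps ratios, since a plain arithmetic mean over-weights games with large swings. The numbers below are placeholders, not the thread's data:

```python
# Building a single "% faster on average" number from per-game fps ratios.
from math import prod

def percent_faster(ratios):
    """Geometric mean of per-game fps ratios (card_a / card_b), as a
    percent advantage for card_a."""
    gm = prod(ratios) ** (1.0 / len(ratios))
    return (gm - 1.0) * 100.0

# Hypothetical per-game ratios for two Vulkan titles: card_a loses
# slightly in one game and wins big in the other.
ratios = [0.93, 1.55]
overall = percent_faster(ratios)
assert overall > 0  # the big win dominates the aggregate
```

This is also why a two-game sample is fragile: one outlier title (such as a heavily vendor-optimized port) can flip the sign of the whole aggregate.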

>> No.55995672

Why are you asking this question? Are you here seeking validation? Fuck off nu-chan, this isn't your hugbox or echo chamber.

>> No.55995686

Holy shit what the fuck is going on here.

The 480 must have HUGE driver overhead.

The irony that you have to pair a 480 with a fucking i7 to actually make it perform well.

>> No.55995690

That proves that the claim "DX12 is the same as Vulkan" is false, correct?

>> No.55995696

Nope, Pascal still doesn't have hardware async. It can SIMULATE hardware async using preemption, but it doesn't actually have it. Maxwell may have had some form of preemption, but it was much more limited and could only be done at an early stage of the pipeline. Pascal can do proper CPU-style preemption, which is why it gets a performance benefit from emulating async and not a zero increase or even a regression like Maxwell and its more basic preemption.

>> No.55995702

Uhh the Nvidia 10 series literally has hardware async:


>> No.55995714

Nope it literally has hardware async

>> No.55995720

Nope, they just used a mid-range part from 7 years ago to bring it out. Anything more modern isn't going to run into the same issue.

>> No.55995734
File: 838 KB, 1920x1440, terranmasterrace.jpg

>tfw people actually buy 1060s and 480s
>tfw I bought a used OC'd aftermarket 980 Ti which is close to 1080 performance for 350 euro
>1.5 years of warranty left on it

>> No.55995739
File: 136 KB, 992x774, y.png


How about you go to Nvidia's whitepaper on Pascal and see what it really does?


>> No.55995741

That doesn't explain why the 1060 performs so well with a midrange CPU, while the 480 performs horribly.

>> No.55995757

Bro you really are not very smart.

Preemption and Async are not the same thing.

The 10 series has both Preemption AND Async.

See: http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/9

>> No.55995759


Also, search for "async". Guess what?!? NOPEEEEE

>> No.55995778

To be fair, the i5-750 is a very old CPU. The i5-4690 came out over two years ago and is well over twice as fast. Most people are using a CPU at least as fast as a 4690.

>> No.55995783


You're saying that like any Nvidia owner has ever cared about async. I own a 980 and I play games regularly with people who also own Nvidia GPUs, and none of us even gives a shit about async. I play 100% DX11 games like Witcher 3: Blood and Wine and Fallout 4, alongside some older DX11 games like Borderlands 2 and shit. We've never ever cared about async since it has 0 effect on us and never will with our current GPUs. I'll be upgrading this 980 in a year and a half, and by then all GPUs that are out should be able to take advantage of your beloved async. If anything, it's the rabid AMD fanboys who keep throwing async into everyone's faces.

>> No.55995792

No matter how many times you say otherwise, emulation through preemption (i.e. suspending less active threads to run more active ones) based on orders coming from the driver is not the same as dynamically scheduling threads in hardware. You can get somewhat similar results, but it's not proper async compute.
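A toy timing model of that distinction (illustrative only; real GPU scheduling is far more involved, and every number here is invented): preemption serializes the two workloads on the same units with a switch cost, while concurrent execution lets compute soak up idle time inside the graphics workload.

```python
# Two simplistic per-frame cost models for running a graphics workload
# alongside a compute workload.

def preemptive(graphics_ms: float, compute_ms: float, switch_ms: float,
               slices: int) -> float:
    """Time-sliced sharing: both workloads run on the same units, so
    total time is their sum plus a context-switch cost per slice."""
    return graphics_ms + compute_ms + switch_ms * slices

def concurrent(graphics_ms: float, compute_ms: float,
               idle_fraction: float) -> float:
    """Hardware-scheduled overlap: compute fills idle gaps inside the
    graphics workload; only the overflow extends the frame."""
    absorbed = graphics_ms * idle_fraction
    return graphics_ms + max(0.0, compute_ms - absorbed)

g, c = 12.0, 3.0  # hypothetical graphics and compute costs in ms
t_preempt = preemptive(g, c, switch_ms=0.1, slices=4)  # 15.4 ms
t_async = concurrent(g, c, idle_fraction=0.3)          # 12.0 ms
assert t_async < t_preempt
```

Under this model both approaches finish the same work, which matches the "same result" argument upthread, but only the concurrent path hides the compute cost inside the frame.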

>> No.55995795

>anything more modern


>> No.55995807

Seriously you haven't even read the article:


10 series literally has a hardware scheduler. Hardware async.

>> No.55995814

The i5 750 was midrange in 2009. Now it's not even low end, it's just obsolete for gaming.

>> No.55995830

>Preemption and Async are not the same thing.

Yes, they are the same thing, but preemption is not the efficient way to do it. That's why AMD pulls ahead with hardware-based async and not software-based.

See video for retards.

>> No.55995842

Nvidia hits a wall less easily than AMD, and in this benchmark they just used chips that hit a wall for AMD but weren't slow enough to hit a wall for Nvidia.

>> No.55995849

Lol no they are not. Jesus you are dumb as bricks.

>> No.55995855

See >>55995739

The hardware scheduler you're talking about is for hardware preemption, not async compute.

>> No.55995860

Why does AMD have such huge CPU requirements?

The irony that a budget AMD GPU needs a high end CPU to perform well is hilarious.

>> No.55995867

Lol it's literally a hardware scheduler for async compute.

See: http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/9

>> No.55995884

>4-year-old chip known for bad per-thread performance
>Games known to be very heavy on a single thread
While the 8350 is more modern and more commonly used, you're still much more likely to have an Intel chip with much better per-thread performance, eliminating the problem.

>> No.55995923

Nvidia's own white paper > AnandTech shills

Just get over it kiddo...

The 1060 isn't exactly top-of-the-line either, and the 8350 is pretty low-end by modern standards. Something like an i5-4690K, a mid-range part from 2 years ago, should obviously do the trick, and it's not a high-end part unless you really stretch the definition of high end.

>> No.55995944

I guess every reviewer must be a shill:


10 series clearly has hardware async

>> No.55995962

>Lol it's literally a hardware scheduler for async compute.

Yeah, a hardware scheduler for preemptive async compute.

Well done Nvidia!

>> No.55995963

Because AMD's DX11 drivers are very single-threaded. Having a beefier CPU with superior single-threaded performance will benefit AMD way more than Nvidia. Nvidia's drivers tend to take advantage of more threads.

>> No.55995969


haha no

>> No.55995975

You do realize that you're now claiming that review sites know Nvidia's hardware better than Nvidia themselves, right?

>> No.55995977

>$250 CPU is midrange

Top kek.

Gotta love it when you buy a cheap AMD GPU then have to buy a high end CPU just to make it work right.

>> No.55996000

>Thinks this is high end for a CPU
I guess such is the life of someone still living with their parents or working minimum wage...

>> No.55996005

Uhh Nvidia actually had async compute all the way back in the 7 series, but it wasn't even enabled.

Now they actually enabled it in the 10 series.

You didn't even read the article: http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/9

>> No.55996021


Nigga, are you high? I have one of those and that shit was expensive. People buying budget mainstream cards like these won't have that. I know a guy who just bought a GTX 1060 literally 2 days ago, and he's got it matched up with an A8-6600 AMD APU. He's probably going to upgrade to an i3 6100 in like 6 months, but your point that people buying these GPUs already have expensive top-end CPUs is false.

>> No.55996046

Yep. I actually feel really bad for anyone who gets a 480 expecting good performance in their budget build, when you actually need a high-end CPU to make it work right.

>> No.55996050

Dude, don't listen to this "save money" guy. It's all I hear on /g/. Even if you're rich and say "money is not an issue", these faggots come out of the woodwork yelling how you can save money and you don't need X because Y, and Z value is lost. Fuck his cheap shit-brand 1-fan model. It's not just about price; buy a brand and a product you're happy with. Plus, as you said, it was the cheapest anyway. Fuck, I'm getting flashbacks from asking /g/ what to spend my money on: bargain bin shit and value value VALUE. I did not ask for this.

>> No.55996079

>i7-2600 and 780
>max FPS in Doom Vulkan 63
>This review

I believe this not.

>> No.55996104

No matter how many times you post that link, it doesn't make what it says true. Nvidia's own white paper on the GP100 chip clearly contradicts it, and there really isn't a higher authority on Nvidia's hardware than Nvidia themselves.

Seriously, get the fuck over it already. GP100 doesn't have actual async compute even if AnandTech says so.

>> No.55996105

I ordered a 4590 and got sent a 4690K; even so, i5s aren't that much

>> No.55996107

I came here looking for GPU advice and read that /g/ has AMD's cock shoved so far up its ass it's poking out of the mouth

>> No.55996123

I installed an MSI Armor OC this weekend. It is overclocked to 2 GHz and doesn't break 70°C

>> No.55996135

>anandtech shills
They have proved pretty unbiased in the past and they are using Nvidia supplied slides that demonstrate the hardware async capability of the Pascal GPUs.

I think what you need to get over is the fact that AMD isn't unique.

One other thing you have to remember is that the performance increase you will see from Nvidia in terms of async will generally be lower than AMD's, and the reason for this goes way back to around the 200 series for Nvidia.
AMD GPUs have a metric shitton of shader units compared to Nvidia. They're smaller and more flexible, which is what makes the cards so much better at OpenCL and buttcoin mining.
It also means that there are more likely to be idle shaders which async can make use of.
Nvidia runs a tighter ship with larger, less flexible cores. It can do async with Pascal, but you won't see the same utilization as with AMD because there will be fewer idle cores around to schedule async tasks onto.
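To put rough numbers on the "idle shaders" point, here's a toy Python model. This is my own sketch, nothing from Nvidia or AMD: the 1800-unit graphics load is invented, though 2304 and 1920 are the real shader/CUDA core counts of the RX 480 and GTX 1070.

```python
# Toy utilization model (illustrative only, not real GPU scheduling):
# a graphics workload occupies a fixed number of shader units, and
# async compute can only fill whatever units are left idle.

def async_gain(total_units, units_used_by_graphics):
    """Fraction of the GPU that async compute could reclaim."""
    idle = total_units - units_used_by_graphics
    return idle / total_units

# Same hypothetical graphics load on a "wide" GPU vs a "narrow" one.
wide = async_gain(total_units=2304, units_used_by_graphics=1800)
narrow = async_gain(total_units=1920, units_used_by_graphics=1800)

print(f"wide GPU:   {wide:.0%} of units idle for async")
print(f"narrow GPU: {narrow:.0%} of units idle for async")
```

Same graphics load, wider GPU, much bigger idle slice for async to fill. That's the whole argument above in two lines of arithmetic.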

>> No.55996137

The fact that one of your friends uses much more of his available budget for GPUs than CPUs than what any sensible person would doesn't mean that this is something people in general do. Try to remember that even the i3 6100 he's planning on upgrading to will bottleneck the 1060.

>> No.55996174

>They have proved pretty unbiased in the past and they are using Nvidia supplied slides that demonstrate the hardware async capability of the Pascal GPUs.
Well, here they seem to have either misunderstood those slides (the guy who founded and used to run the site quit a few years ago) or been intentionally misled by Nvidia into believing that the 1000-series has proper hardware async when it doesn't.

The white paper posted by another anon is pretty clear on the matter. Pascal has full CPU-style hardware preemption, not async compute.

>> No.55996187

Uhh that's not true, Nvidia's whitepaper doesn't contradict it at all.

Every single review shows it has hardware async.

Here's another to add to the list:

"Pascal GPUs introduces several new HARDWARE and software features to beef up its async compute capabilities"


>> No.55996199

Every single review shows the 10 series has hardware async.

You're just too stupid to get it.

>> No.55996202

Maybe you should stop linking the Tesla whitepaper.


>> No.55996213

The problem is the CPU will bottleneck the AMD GPU way more.

It means buying an AMD GPU in a budget build is a very bad idea.

>> No.55996250

More proof the 10 series has hardware async compute:

"Pascal is finally offering a solution with hardware scheduled async compute"


>> No.55996255

>Every single review shows it has hardware async.
Nope, reviews show it can EMULATE async compute using the preemption I mentioned and get a performance boost through the improved utilization.

>Still insisting that reviewers know more about Nvidia's hardware than Nvidia themselves

>> No.55996276

Nope it literally has hardware async compute.

>> No.55996283

>Video cards aren't technology
Literally retarded

>> No.55996293

Not sure if you posted that to boost my point or as a bad attempt at countering it because it clearly talks about using preemption to implement async compute.

>> No.55996316

>Nope, reviews show it can EMULATE async compute using the preemption I mentioned and get a performance boost through the improved utilization.
Read >>55996202
and read it CAREFULLY, because no doubt you will get to the Preempt part and think "OMG ALL IT CAN DO IS PREEMPT" and forget the paragraphs that came beforehand.

Preempt is something important that the Pascal GPUs are capable of, but that is IN ADDITION to hardware async compute.

>> No.55996319

You don't even understand what you're talking about.

>> No.55996327

>White paper from the manufacturer clearly talks about preemption and how this can be used to emulate async compute
>Hurr durr review sites know more about Nvidia's hardware than Nvidia themselves!!1

>> No.55996341

Uhh the white paper goes into all the ways Nvidia's 10 series hardware scheduler works. Preemption is just one part of it, the whole thing works towards async.

Read: http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/9

>> No.55996360

Because you can't read.

>hardware async
For overlapping workloads, Pascal introduces support for “dynamic load balancing.” In Maxwell
generation GPUs, overlapping workloads were implemented with static partitioning of the GPU into a
subset that runs graphics, and a subset that runs compute. This is efficient provided that the balance of
work between the two loads roughly matches the partitioning ratio. However, if the compute workload
takes longer than the graphics workload, and both need to complete before new work can be done, then
the portion of the GPU configured to run graphics will go idle. This can cause reduced performance that
may exceed any performance benefit that would have been provided from running the workloads overlapped.
Hardware dynamic load balancing addresses this issue by allowing either workload to fill the
rest of the machine if idle resources are available.

Time critical workloads are the second important asynchronous compute scenario. For example, an
asynchronous timewarp operation must complete before scanout starts or a frame will be dropped. In
this scenario, the GPU needs to support very fast and low latency preemption to move the less critical
workload off of the GPU so that the more critical workload can run as soon as possible.
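Since people keep talking past each other on what that first paragraph means, here's a crude sketch of the two scheduling models it describes: static partitioning (Maxwell) vs dynamic load balancing (Pascal). The unit counts and work amounts are my own made-up numbers, not anything from the whitepaper.

```python
# Static partitioning: each partition only runs its own workload, so
# the frame ends when the slower partition finishes while the other
# side sits idle. Dynamic balancing: whichever workload finishes early
# frees its units for the other, so total throughput is what matters.

def frame_time_static(gfx_work, compute_work, gfx_units, compute_units):
    return max(gfx_work / gfx_units, compute_work / compute_units)

def frame_time_dynamic(gfx_work, compute_work, gfx_units, compute_units):
    # Idealized: assumes the leftover workload can spread perfectly
    # across all freed units.
    return (gfx_work + compute_work) / (gfx_units + compute_units)

gfx, comp = 100.0, 300.0  # compute takes longer, as in the paper's example
print(frame_time_static(gfx, comp, 50, 50))   # graphics half goes idle
print(frame_time_dynamic(gfx, comp, 50, 50))  # idle units get reused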

>> No.55996366

I read them, those "async" paragraphs specifically talk about preemption being used. Here's a quote:
> In this scenario, the GPU needs to support very fast and low latency preemption to move the less critical workload off of the GPU so that the more critical workload can run as soon as possible.

In short: You just helped me prove that Nvidia emulates async compute using preemption.

>> No.55996417

You are really retarded. Preemption is just one part of it. The 10 series has a load of techniques that all come together for hardware async compute. No emulation.

>> No.55996419

>cherry picking one line out of context
That's nice, but I already posted the kick in your pants.

>> No.55996462

No, preemption is how Nvidia implements it. It's not real hardware async; it's merely getting better utilization through switching between critical and less critical workloads on the fly.

Proper async compute, which is what AMD does, works by mixing the different workloads dynamically and having them run side-by-side, not by having them take turns based on how heavy those workloads are and how much stress is being put on the GPU.
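If it helps, here's the difference being argued, sketched as toy execution traces. Purely illustrative, this is not how any real GPU schedules work:

```python
# Preemption: one workload owns the hardware at a time and gets
# swapped in and out. Async compute: both workloads genuinely run
# side by side in every tick.

def preemptive_trace(ticks, slice_len=2):
    # Round-robin between "G"raphics and "C"ompute; never both at once.
    return ["G" if (t // slice_len) % 2 == 0 else "C" for t in range(ticks)]

def async_trace(ticks):
    # Both workloads occupy the machine together every tick.
    return [("G", "C") for _ in range(ticks)]

print(preemptive_trace(8))  # ['G', 'G', 'C', 'C', 'G', 'G', 'C', 'C']
print(async_trace(3))       # [('G', 'C'), ('G', 'C'), ('G', 'C')]
```

Both traces can end up with similar total utilization, which is why benchmark results alone can't settle which one the hardware is doing.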

>> No.55996478

You mean like in >>55996360

>taking turns based on how heavy those workloads are and how much stress is being put on the GPU
I don't think you understand the difference between a heavy workload and a time critical workload.

>> No.55996493

No it's not, it's literally hardware async

See: http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/9

>> No.55996508

>The sentence that explains how it's actually done
>Cherry picking
Sorry, but your "kick in the pants" missed pretty badly. Most of what you posted (which included the core sentence I posted) was about how the GPU figures out when to use preemption and what it's trying to do by doing so.

>> No.55996531

Uhh it literally says exactly how Nvidia implements hardware async

>> No.55996553

>If I post this link one more time it'll definitely prove I'm right even thou the manufacturer's own white paper says otherwise
Just give up already because no matter how many times you post that link, it's not going to prove anything when it contradicts the manufacturer's own white paper.

>> No.55996556

You're right, it uses preemption to do time critical workloads, but asynchronously computes overlapping workloads.

Why is this so hard for you to understand?

>b-but its not the same as AMD!
Their entire fucking architecture is different to AMD.

>> No.55996575

>Uhh it literally says exactly how Nvidia implements hardware async
Yes, in the sentence that I highlighted that talked about using preemption, not asynchronous scheduling.

>> No.55996597

The link says exactly what is in the white paper, that the 10 series has hardware async compute.

You might want to actually read it for once, it goes through all the techniques used for hardware async compute in the Nvidia 10 series:


>> No.55996607

10 series has both preemption and hardware async compute

>> No.55996634

>not asynchronous scheduling.
You just moved the goal posts to something that has already been demonstrated as working on Pascal by the Time Spy demo.

>> No.55996653

>but asynchronously computes overlapping workloads
Nope, it says that they solve this "by allowing either workload to fill the rest of the machine if idle resources are available", i.e. preemption. The whole point of preemption is to switch between tasks to utilize hardware that is under-utilized or not utilized at all.

>> No.55996670
File: 370 KB, 997x1118, 2.png [View same] [iqdb] [saucenao] [google] [report]

Nvidia has a little stop light.

>> No.55996689

That's the 9 series.

The 10 series has full async hardware support.

>> No.55996713

>You just moved the goal posts to something that has already been demonstrated as working on Pascal by the Time Spy demo.
Once again, that benchmark showed that Nvidia's preemption-based emulated async works, not that they have actual async compute.

I'm just going to point you once again to Nvidia's own white paper on the GP100 chip being used in the 1080 and 1070:

>> No.55996726

You seriously have no idea what you're talking about.

Read this: http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/9

>> No.55996737

Thanks for the white paper, it clearly shows the Nvidia 10 series using hardware async compute.

>> No.55996738


>> No.55996744

>real gold lemon flavor
But why

>> No.55996758

Literally 4 reviews all above show the 10 series with hardware async compute.

Even Nvidia's own white paper shows the 10 series with hardware async compute.

>> No.55996775

But it is not switching between tasks. As plainly explained, when concurrent tasks are running and one finishes before the other, the other can dynamically use the idle resources that have become available.

That is not preempt. It is not switching between tasks.

>> No.55996777

>Async compute
>0 matches
>Only hit for "asynchronous" is for "asynchronous memory copies"
Sure makes a great case doesn't it?

>> No.55996793

I will once again point you to the white paper on the 1080.


>> No.55996797

The white paper goes through all the techniques used for the Nvidia 10 series hardware async compute.

>> No.55996805

It is absolutely delicious and not real gold at all.

>> No.55996807

Async compute is right on page 14 bro:


>> No.55996814

Are the LEDs just ok the Gigabyte logo and "FAN STOP" lettering?
I was looking at the G1 1060 to replace my 980, and it would be nice if some of the LEDs had shine-back into the heatsink.

>> No.55996821

>Closest thing in the white paper for the chip used in the 1070 and 1080 to make reference to asynchronous compute is asynchronous memory copies
>White paper on the GPU talks about async compute, but on closer inspection explains it's done using hardware preemption
Yes, those reviewers totally know better than Nvidia themselves, right?

>> No.55996834

Nvidia themselves says the same thing: http://international.download.nvidia.com/geforce-com/international/pdfs/GeForce_GTX_1080_Whitepaper_FINAL.pdf

Page 14 async compute

Clearly the Nvidia 10 series has hardware async compute.

>> No.55996841

>but on closer inspection explains it's done using hardware preemption
>on closer inspection
>not just the description of one of two possible use cases, one of which being preemption, the other being concurrent async compute
Is it really this difficult for you to grasp?

>> No.55996844

That's the white paper for the 1080, not the chip in it, and on closer inspection you can see that it actually talks about using preemption to achieve roughly the same result. If the GP100 really had hardware async compute, the white paper would definitely mention it. It's THAT big of a feature.

>> No.55996846

Just on*

>> No.55996854

It shows the Nvidia 10 series has hardware async compute.

>> No.55996867

>That's the white paper for the 1080, not the chip in it
Fucking grasping at straws. Did you even notice that
is the whitepaper for the TESLA P100?

>> No.55996885

>Explains the problem
>Talks about proper async compute and preemption
>However only talks about preemption as something the GPU actually has

Seriously, if the GP100 genuinely had async compute, the white paper on the chip would at least make mention of it and not just talk about the new preemption capabilities.

>> No.55996901

Is the 10 series pascal?

>> No.55996904

>>However only talks about preemption as something the GPU actually has
You missed the entire preceding paragraph?
That's some special ed reading level right there.

>> No.55996910

The GP100 is the EXACT SAME CHIP used in the 1070 and 1080. It would be really weird for Nvidia to disable features on a more expensive card than on a cheaper card.

>> No.55996912

Holy shit, you are trying too hard. Lurk moar

>> No.55996921

It clearly shows it has hardware async compute.

>> No.55996937


10 series = 1060, 1070, 1080

>> No.55996944

>clearly shows it has hardware async compute.
what method of async? Preemption?

>> No.55996950

Then the only thing to conclude is that everything that applies to the GTX1080 also applies to the GP100.

Why you would conclude that anything lacking from the GP100 sheet that is on the GTX1080 sheet means that the chip overall lacks those features is beyond me.

>> No.55996955

Even Wikipedia says Pascal has async:


>> No.55996971

Hardware async compute

>> No.55996985

But I like the way real gold tastes

>> No.55997032

Your "async" literally hinges on one semi-vaguely worded sentence that talks about workloads being able to "fill up" available unused resources. This one sentence can be interpreted in many ways: one of them is to assume it has real async compute, and another is that it's using preemption to utilize the unused resources. After that there's much more talk about preemption than there was about async compute, making it pretty clear this was just intentionally misleading wording.

>> No.55997048


Oh then it has hardware to help with the preemption method of Async compute. So yea it's on the hardware level but still using preemption.

>> No.55997050

Let's go over this a bit further.
The GP100 sheet is 44 pages, the GTX1080 is 50 pages. That alone shows there is more in the GTX1080 sheet.

GTX1080 has a section on enhanced memory compression, GP100 does not mention it at all.
GTX1080 has a section on simultaneous multi-projection engine, GP100 does not mention it at all.
GTX1080 has a section on perceptive surround, GP100 does not mention it at all.
GTX1080 has a section on lens matched shading, GP100 does not mention it at all.

Are we seeing a pattern here yet?

>> No.55997058

Preemption is just one part.

The Nvidia 10 series has full hardware async compute.

>> No.55997075

It's actually full hardware async compute.

>> No.55997077

>Then the only thing to conclude is that everything that applies to the GTX1080 also applies to the GP100.
I don't agree with your logic there when the only claim that the 1000-series has async compute is a single semi-vaguely worded sentence that can easily be interpreted as implementing async through preemption and is followed by multiple pages about new preemption techniques.

>> No.55997095

You are on the right side of History, congrats.

Also don't forget to vote for Hillary.

>> No.55997111

Nah man, Trump is the one to vote for if you want a strong Nvidia GPU.

>> No.55997117

>Better memory compression than before
>VR stuff
>Somehow as big of a deal as async compute (a new feature for Nvidia hardware)
If the GP100 genuinely had hardware async compute, then there would be more than that one semi-vaguely worded sentence in the 1080 white paper.

>> No.55997124

Nvidia 10 series clearly has hardware async compute.

>> No.55997129
File: 25 KB, 617x348, not preempt.png [View same] [iqdb] [saucenao] [google] [report]

Luckily they included this handy infographic to clear that up for you.

Preempt would be switching tasks, which is not what is described, and illustrated, as happening here. What is happening here is the compute task is using the newly freed resources.
It is not switching to a different task to use those resources.

>> No.55997139

There's a whole chapter on it, page 14.

Full hardware async compute support.

>> No.55997143

he can't because he is full of shit.

>> No.55997168

People have already pointed out that async is not actually new; it was just a pile of shit that had to be managed perfectly in the past. Now it doesn't have to be.

>> No.55997179

If you look at it a bit closer, it IS actually describing preemption. The lighter green workload is completely cut off as the GPU changes scheduling to completely work on the darker green workload.

I guess I should thank you for that.

>> No.55997191

It's actually showing full hardware async support in the Nvidia 10 series.

>> No.55997216

The only sentence in that section which actually talks about a solution to it is the first full sentence on page 15 (just above the graph). Even that is vaguely worded so that it can be interpreted as using preemption.

>> No.55997233

>is completely cut off
That's because the task has ended. If it was still ongoing then it wouldn't have ended.

>> No.55997273

Nope, preemption is putting a process in idle so that others can utilize the resources it was using. The graph clearly shows how a process is completely cut off as it goes to idle, and is basically textbook preemption.

Sure, the same can be achieved through async compute, but this definitely doesn't prove it, as it also matches textbook preemption.

>> No.55997281

Based on this thread I have come to the conclusion that there are fewer Nvidia fanboys than I'd previously thought.

Based on >>55993794

where they repeat the same thing over and over again. All they (or he) do is make a low-effort reply with either no link or the same link, then just sit back and watch people dance for them.

>> No.55997301

can you prove to me they didn't?

>> No.55997303

Compute tasks in games don't end when they're done, they go to idle, usually until the next frame begins rendering.

>> No.55997304

>The graph clearly shows how a process is completely cut off as it goes to idle and is basically textbook preemption.
>Figure 10: Pascal's Dynamic Load Balancing Reduces GPU Idle Time When Graphics Work Finishes Early, Allowing the GPU to Quickly Switch to Compute
How fucking desperate are you?

>> No.55997323

Isn't the burden of proof on the person who makes the first claim?

>> No.55997327

the burden of proof is on the accuser bud.

>> No.55997332

It's actually very clearly hardware async compute

>> No.55997338

Yes, but he isn't coming back :c

>> No.55997341

Nope it's full hardware async compute

>> No.55997349

>"Allowing the GPU to Switch to Compute"
>Somehow not describing preemption pretty much perfectly
Man you're working overtime giving me proof that Pascal is just using preemption to emulate proper async compute...

>> No.55997356

Actually Pascal has full hardware async compute

>> No.55997359

And on Maxwell the idle time would be lost, as demonstrated in the Static Partitioning part of the infographic.

However, with Dynamic Balancing, compute (or graphics) can move in to use that idle time. It doesn't switch task, it doesn't cut it off, it just makes use of the idle resources. That's EXACTLY what it explains, if you bother to read it.

>> No.55997363

Actually, Pascal has full hardware async support.

>> No.55997373


>> No.55997386

>Graph can easily describe two different techniques
>Definitely has to be one of said techniques because of the semi-vaguely worded sentence before it
>Several much less vaguely worded paragraphs about the latter technique just after the graph are somehow irrelevant

>> No.55997388

AMD is worse in async

AMD is worse in DX12

AMD is worse in Vulkan

There's literally nothing AMD can do good anymore.

>> No.55997401

It's pretty clear that it's full hardware async compute

>> No.55997419

The latter technique is one it talks about in a completely different scenario, as made clear by the opening words.

>Time critical workloads are the second important asynchronous compute scenario.

It is funny that preempt is not mentioned at all in regards to the first scenario, but it must be being used there too!

>> No.55997422

Actually, Pascal is full hardware async support.

>> No.55997431


>> No.55997441

Actually, Pascal is god.

>> No.55997445

Actually Pascal only emulates async compute using preemption.

Preemption is literally all about using idle resources by switching between tasks. As I said, the graph can describe async compute just as well as it can describe Pascal's improved preemption, which allows for stuff like that. Graphics tasks do not just completely terminate when a frame finishes rendering; they go idle.

>> No.55997458

Pascal actually has full async hardware support.

>> No.55997472

Actually, Pascal is async compute

>> No.55997499

It's a different scenario yes, but you're just assuming that Nvidia wouldn't be economical with the truth. As I said, the wording is suitably vague for the async stuff, but very specific and clear for the preemption stuff. Even the graph could just as well describe preemption as it could describe proper async compute.

Try to remember that what you're reading is a marketing document, not a technical one for software developers.

>> No.55997526

Just fuck off already.

I believe, from what is described in the preceding paragraph and from what we have seen in the Futuremark disclosure, that it is not preempt (at least in the majority of cases, because preempt can occur whenever a time-critical task appears).

>> No.55997554

They actually make it very clear it has hardware async compute.

>> No.55997563

As I said, getting improved results is not actual evidence of hardware async compute being present. CPUs have used preemption for decades already because it actually works and provides an improvement in performance.
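For anyone wondering why preemption alone is worth anything: here's a minimal sketch of the time-critical scenario (the timewarp case from the whitepaper). The tick counts are made up; the point is only that the critical job runs immediately instead of waiting behind the background one.

```python
# Minimal preemptive-scheduler sketch: a time-critical job arriving
# mid-run preempts the background job, which resumes afterwards.
# No concurrency anywhere, yet the critical job's latency is minimal.

def schedule(background_len, critical_arrival, critical_len):
    trace = ["bg"] * critical_arrival          # background runs first
    trace += ["crit"] * critical_len           # critical job preempts
    trace += ["bg"] * (background_len - critical_arrival)  # bg resumes
    return trace

print(schedule(background_len=4, critical_arrival=2, critical_len=3))
# ['bg', 'bg', 'crit', 'crit', 'crit', 'bg', 'bg']
```

Without preemption the critical job would sit in a queue until the background job finished, so yes, preemption alone measurably improves results.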

>> No.55997565
File: 27 KB, 217x190, 1470379217927.png [View same] [iqdb] [saucenao] [google] [report]

>can't provide sources or refute
>"just fuck off already"

>> No.55997583

Not really, when it all hinges on the semi-vaguely worded last sentence of the section on async compute.

>> No.55997591

I'm the one that has actually been arguing with him all this time. I'm just fed up with you dumb cunts just copy pasting the same shit over and over.
Fucking give it a rest or contribute something constructive.

>> No.55997594

This thread is hilarious.

- Every review says Nvidia has hardware async compute
- Nvidia themselves say they have hardware async compute
- Wikipedia says Nvidia has hardware async compute

Yet some random /g/ troll thinks it doesn't. Just hilarious.

>> No.55997605

>Someone repeats one sentence with no references over and over again
>Thinks there is something to refute

>> No.55997607

It's actually very clear that it has hardware async compute.

>> No.55997624

It's actually very clear that it doesn't have hardware async compute.

>> No.55997630

See >>55997594

All evidence shows it has hardware async compute.

>> No.55997668

Is there any simple way we could test hardware async compute?
Like, what would be the simplest application we could write to test it?
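The usual approach: time a graphics-ish workload and a compute workload separately, then submit both together on separate queues and compare wall times. Below is a sketch of just the measurement logic; the millisecond numbers are placeholders you'd replace with real GPU timings (e.g. from D3D12 or Vulkan timestamp queries).

```python
# If the combined wall time is close to max() of the two solo times,
# the GPU overlapped the workloads; if it's close to their sum, it
# serialized them; anything in between suggests partial overlap
# (e.g. preemption or load balancing kicking in).

def overlap_verdict(time_gfx_alone, time_compute_alone, time_together,
                    tolerance=0.15):
    serial = time_gfx_alone + time_compute_alone
    best_case = max(time_gfx_alone, time_compute_alone)
    if time_together <= best_case * (1 + tolerance):
        return "overlapping (async-style execution)"
    if time_together >= serial * (1 - tolerance):
        return "serialized (no concurrent execution)"
    return "partial overlap (e.g. preemption / load balancing)"

# Made-up measurements, in milliseconds:
print(overlap_verdict(10.0, 12.0, 12.5))   # near max -> overlapping
print(overlap_verdict(10.0, 12.0, 21.5))   # near sum -> serialized
```

The catch, as this whole thread demonstrates, is that this only tells you the workloads overlapped, not *how* the hardware made them overlap.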

>> No.55997671

More like
>Nvidiots point to tech review sites that have been misled
>Someone posts the white paper on the chip itself
>When confronted with the white paper on the chip itself, they resort to an alternative white paper that uses some really vague wording for async compute but completely clear wording for preemption
>Fanboys go into complete denial mode when confronted with the inconsistency between the white papers and the vague wording on async compute
I guess /v/ is just going to be /v/...

>> No.55997689

Literally all reviews show it has async compute, and Nvidia's own white paper says it has async compute

>> No.55997691

Because emulation through preemption produces roughly the same results, it's pretty difficult. The only way to make sure would be to look at Nvidia's internal documents, and they sure as hell aren't going to share those with the public.

>> No.55997700
File: 97 KB, 1500x844, maxresdefault[1].jpg [View same] [iqdb] [saucenao] [google] [report]


I have a 34" 3440x1440.

1070 or 1080? Or wait until the Ti comes out? Or am I just fucked in general?

Looking at benchmarks, it looks like neither of them will run recent shit at 60 fps with max settings.

>> No.55997707
File: 17 KB, 650x300, 1070-async.png [View same] [iqdb] [saucenao] [google] [report]

It's called Time Spy and it's already been extensively tested.

Pic related, showing the 1070 using Async Compute.

>> No.55997731

1080 definitely. Or Titan.

>> No.55997740

Actually, I love Pascal Async compute
Pascal has the best Async compute

>> No.55997754

Pascal has the best async compute ever!

>> No.55997766

Didn't you get the memo? AMD is dead, nobody gives two fucks about it. This board is shilling for it because it's contrarian as fuck, same way /tv/ shills for DC or neo-/v/ shills for Sony.

>> No.55997783

1080 if you need it now.
If you can wait, 1080Ti. It will definitely come.

>> No.55997790

Reviews show it can at least emulate async compute decently well, while Nvidia's own marketing material either says it has it, but using wording that can just as well be interpreted as emulation through preemption, or makes no mention of it at all.

In other words: it's nowhere near as clear cut as you make it out to be.

>> No.55997802

All the reviews clearly show it has hardware async

>> No.55997820

It's actually very clear that it doesn't have hardware async compute.

>> No.55997825

How long of a wait do you think it will be? Q4?

No Man's Sky is calling and my 970 runs like piss.

>> No.55997829

Every single source says it has hardware async compute

>> No.55997873

>Using feature level 11_0
>(i.e. the most basic DX12 feature level)
>No alternative paths for GPUs that can use less basic feature levels
>Somehow a definitive async compute bench
Seriously, Time Spy is a genuinely garbage async and DX12 bench in general.

>> No.55997914

>"Hey, if all these sources say the same thing it must be the truth"
>All of them actually use the same press pack from Nvidia
Well, kids do tend to be naive, so I guess you can't blame these two...

>> No.55997924

Implying it's legal and/or even in the best interests of any company to lie about specifications of their hardware.

>muh press pack is lyin'

Shut up, retard.

>> No.55997954

>"How dare you suggest a company set up to make money would lie or even mislead the press into making their products look better than what they actually are! Next you're going to tell me I'm not actually a nigerian millionaire!"

>> No.55997976

>implying a company would subject themselves to such a stupid fucking risk, opening up a heyday of legal battles including class-action lawsuits

Jesus christ, kids in college are fucking dumb.

>> No.55997983

>Next you're going to tell me I'm not actually a nigerian millionaire!"
I have no possible way to dispute this claim.

>> No.55998158

I am going to make a concession: the GTX1080 whitepaper doesn't actually make clear how it implements the dynamic allocation, so there is the potential for it to be preempt.

I would like to point out, though, that the (Tesla) GP100 whitepaper focuses almost entirely on compute rather than graphics. This is made clear by it describing only instruction-level preempt, while the GTX1080 whitepaper also describes pixel- and thread-level preempt, with instruction level being only for CUDA compute tasks.

>> No.55998261

>1060 has Vulkan and actually beats the 480 in Vulkan
This isn't a YLYL thread

>> No.55998852

What are you, poor?
If you're going to get an Nvidia GPU you should go for the very best: the GTX 1080.

>> No.55998984

It wasn't that long ago that a certain video card came with 4 GB of VRAM that was really 3.5 GB.
What's with the amnesia that seems to come with buying green?

>> No.55999274

You got any pics of a 970 PCB with only 3.5 gb worth of vram chips?

>> No.55999437

>Not very async light
It's software-side. Nvidia said so themselves.
Jesus fucking Christ, you shills are determined, aren't you?
There's a reason why Time Spy isn't async-heavy, and it's because AMD would DESTROY Nvidia on it, because heavy async would cripple Nvidia just as GameWorks crippled AMD.

>> No.55999514
File: 259 KB, 767x889, raresux.jpg [View same] [iqdb] [saucenao] [google] [report]

>GCN architecture

>> No.55999700

Sorry, that came out wrong. It was meant to imply that it IS lightweight on async. When I read it back, it's easy to mix up though.

>> No.55999739

Well it's still a lie.
It's the same thing with current 'support' of dx12.
It's there, but it's useless.
