
Due to resource constraints, /g/ and /tg/ will no longer be archived or available. Other archivers continue to archive these boards.

/g/ - Technology

File: 292 KB, 1440x810, amd_bulldozer_2010-2.png
No.50283964

>AMD Faildozer was 6 years ago now

Has there been a bigger mistake in PC hardware ever since?

>> No.50284617

Bulldozer itself isn't the mistake so much as AMD's R&D execution on it.

The actual core design is pretty good, but the mistake AMD made was not checking how OSes do their process scheduling, which is why it performed so badly. If they had either shipped drivers that utilize all 4/6/8 threads, or made the chips present themselves to the OS as dual/tri/quad core CPUs and let the CPU distribute threads within the modules, they might not have been so red-faced.
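The workaround described above (present one core per module to the OS, or keep one thread per module) can be sketched by hand with CPU affinity. A minimal sketch, assuming the common layout where logical CPUs 2n and 2n+1 share module n — the actual numbering varies by OS, so treat this as illustrative only:

```python
def one_thread_per_module(num_modules):
    """Pick one logical CPU per two-wide Bulldozer module so the
    two sibling threads don't fight over the shared decoder/FPU."""
    # Assumes logical CPUs 2n and 2n+1 share module n.
    return {2 * n for n in range(num_modules)}

# An FX-8xxx exposes 4 modules / 8 integer cores:
print(sorted(one_thread_per_module(4)))  # [0, 2, 4, 6]

# On Linux the current process could then be pinned with, e.g.:
# import os
# os.sched_setaffinity(0, one_thread_per_module(4))
```

This is roughly what the later Windows 7 scheduler hotfixes tried to do automatically: park one thread of each module before doubling up.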

But to answer your question, the only fuck-up I can think of is Intel skipping Broadwell for the mainstream desktop market and rushing Skylake out the door; the performance is seriously not worth a new motherboard and possibly a new cooler and RAM.

>> No.50284772

The OS's process scheduling is not the cause of Bulldozer's bad performance; Windows 7 got patched shortly after release and the improvements were negligible.

>> No.50284820

>The actual core design is pretty good
Kek. I guess that's why AMD is dominating the high end cpu segment right now.

>> No.50285128

Shit, I'm still running a Phenom 955. What do I upgrade to?

>> No.50285207

Why are amd shills so obvious?

>defends bulldozer
>turns around and shits on Intel when they're several years ahead of amd
>complains about the newest platform supporting new industry standards

>> No.50285239

The whole BD family pretty much takes the cake.

There were quite literally hundreds of hardware issues in Bulldozer that killed its potential performance. It was intended originally to be AMD's highest ever serial performance arch.
Pretty much every area of the core needed tons of work, and a ton of issues still remain unfixed in Excavator.

>> No.50285349

>The actual core design is pretty good

>If they would have either given out drivers which utilize all 4/6/8 threads or actually make them so that the OS thinks it's a Dual/Tri/Quad core CPU and letting the CPU distribute within the modules

They did exactly this. The original idea was to run 2 threads per module so they could disable the idle ones and let the turbo go higher. After they realized that the shared FPU and decoder were massive trash, they made an emergency switch to running 1 thread per module to avoid clogging the front-end and FPU. (How the fuck didn't they notice that earlier? AMD had huge FPU problems up until the K7 design, so it's not like they didn't have experience with similar problems...)

>> No.50285370



>> No.50285371
File: 61 KB, 700x367, cs.jpg

Pretty much any hyped architecture or feature from them since or before has been the same.

>> No.50285422

No, it happens only once in a decade.

The core design would have been ok indeed, if the damned thing had climbed to 6 or 7 GHz.
And by then it would have been limited by the utterly crappy cache and memory controller performance.

Either wait for Zen or buy Intel.

>> No.50285534
File: 67 KB, 930x405, Screenshot_2015-09-12-18-00-53-1.png


You intelcucks get higher TDPs, lower single thread performance, and not even the 5% performance improvement people kept saying you would get.

>> No.50285559

What the fuck.

>> No.50285578

every architecture that came after sandy bridge

>> No.50285599


Bulldozer wasn't a mistake, AMD mismanaged the fuck out of it.

If they had released Steamroller and Excavator and continued making gains like they did with Vishera, we would've had CPUs nearly on par with Intel. Vishera gets pretty close to Intel perf once overclocked heavily as well.


>> No.50285661


To add to this, you don't get a good architecture in one fell swoop. The original bulldozer WAS trash, but the issues were being ironed out until AMD lost all their market share and money from the huge backlash it created. An efficient architecture takes years of work to mature, Intel has been tweaking and building on core/core2 for almost 10 years now.

>> No.50285795
File: 89 KB, 960x540, Screenshot_2015-09-14-17-37-34.png

>tfw an fx-8370e has almost the same TDP and multi-core performance as the skylake i5-6600K.

U u
C c
K k
E e
D d

>> No.50285819


4 years ago you dumbarse.

>> No.50285847

>8 half-cores almost reaching the performance of 4 normal cores in multithreaded workloads

>> No.50285866

This doesn't make any sense. Haswell is 22nm and skylake is 14nm. Why didn't the die shrink reduce TDP? I'm really confused.

>> No.50285889

Larger IGP, you idiot.

>> No.50285930

Larger iGPU, and the memory controller has to handle both DDR4 and DDR3, which will obviously require more power.

>> No.50285941

Explain the shitty TDP of the skylake chip. Why would a 32nm space heater only need 4 more watts of TDP to reach 99% of the multi-core performance of your 14nm meme?

I thought the i5 skylake chips like the i5-6600K were going to be like 60W of TDP.

>> No.50285992

I don't have to explain shit to you. Enjoy your poorfag AMD CPU.

>> No.50286002

Stop trolling, you dipshits. The TDPs the tripfag posted are for the CPU part, not the CPU + iGPU.

>> No.50286037

>tech illiterate retard

There is only one TDP figure, and it is for the entire chip: CPU logic, media accelerators, everything. It is a measure of how much heat has to be pulled away from the entire chip to keep it operating within its acceptable temperature range.

>> No.50286048

IGP on skylake is way better, but the desktop versions lost the on chip L4 cache. That's why performance sucks compared to previous chips like 4690K.

Maybe they'll return that L4 cache one day. I won't be buying Intel before they do. Or AMD for that matter.

>> No.50286067
File: 343 KB, 512x512, hltM9GD.png

Of course you don't intelcuck. Can't wait for you to suck intel's cock when they release kaby lake with 0.001% better IPC and 50% higher TDPs.

>> No.50286087

>also has no idea what he's talking about

Only Iris Pro equipped parts have on die eDRAM. The Devil's Canyon Haswell chips were not among them.

>> No.50286137

We can only hope that Zen doesn't go the same way.

>> No.50286138

It's called an iGPU because it's INTEGRATED, you fuckhead; that means it's part of the CPU die, and any TDP figure is going to include the GPU as well.

So like I said, the increase in TDP is explained by the new memory controller, which does both DDR3 and DDR4, and the larger iGPU.

>> No.50286142

Yes, FX-8xxx

>> No.50286149
File: 131 KB, 601x1394, cinebench-single-thread.png

>muh cherrypicked benchmarks from some shitty sales site

vomit_chan, you're clearly being an idiot today in every thread I've seen you in.

>bigger iGPU -> 3W more TDP

>> No.50286203

>Complain about cherry picking
>Do EXACTLY that by picking a single thread bench as counter-point

>> No.50286214

>AMD has high TDP

>Intel's new chip is high TDP

>> No.50286241

He's a loyal goy, which is more than can be said about you, antisemite!

>> No.50286256


dont need to cherry pick buddy

>> No.50286272

>not filtering trip

Come on.

>> No.50286287
File: 448 KB, 455x395, 1412461962011.png

So it's okay to make fun of AMD for their TDPs but it's "insignificant" when intel does it?

>> No.50286300

Is the 6700 like what, 2% faster than a 4790K? Why did the die shrink do nothing?

>> No.50286319

Holy shit. I just realized that I've been on /g/ for over 6 years. I need a hobby.

>> No.50286320

Because they used the die shrink to essentially throw in a DDR4 memory controller and a larger iGPU for roughly the same TDP.

>> No.50286322

>Has there been a bigger mistake in PC hardware ever since?
Nvidia 3.5GB +512MB
Fermi housefire woodscrews
Nvidia not having DX12 in their newest cards

Snapdragon 810

>> No.50286325

>filtering a trip because he's right
the butthurt levels in here are insane

>> No.50286327

Thanks anon.

>> No.50286357
File: 14 KB, 249x228, 1436641056588.jpg

Fermis never had wooden screws. Take your AMD shilling elsewhere.

>> No.50286393
File: 21 KB, 500x169, Fermi_end_plate_cropped.jpg


>> No.50286404

>screw heads make a screw inherently different

fuck off amd shill

>> No.50286408
File: 30 KB, 361x480, Nvidia-Fermi-G300.jpg


>> No.50286426

>Has there been a bigger mistake in PC hardware ever since?

Pentium 4 scaling to 10GHz.

>> No.50286444
File: 36 KB, 588x525, 1430020957943.jpg

>mfw I thought the fermi woodscrew thing was just a news article from the onion but found out it was a legit story

>> No.50286462

I am using an 8320, what do I upgrade to?

>> No.50286469


>Then there are the screws. Notice that the three screws that hold the end plate on are, well, generic wood screws. Large flat head phillips screws. Home Depot grade screws that don’t even sit flush. If a card is real, you hold it on with the bolts on either side of the DVI connector. Go look at any GPU you have, do you see wood screws that don’t mount flush or DVI flanking bolts?

>> No.50286471


Those aren't wood screws.

>> No.50286474

An i5 if you want more single-core performance. An i7 if you want more single-core AND more multi-core performance.

>> No.50286480

>Pretty much any hyped architecture or feature from them since or before has been the same.

Like x64.

>> No.50286484

Can you read the related post:

>lower single thread performance

So now guess why I came up with a single thread bench. Also just google up 6600k vs 4690k and see the difference between them.


Bigger/better iGPU -> more power -> more TDP

Disable it and guess what?

Thing is that AMD's processors have ridiculously high TDPs for relatively shit performance compared to Intel.

I'd rather get cucked by Intel for having actual performance with slow IPC increase instead of staying with AMD no-progress-at-all CPUs (GPUs are fine; I even have an AMD GPU) with shitty IPC but high TDP.

Imagine NVIDIA making x86/AMD64 CPUs. That would be fun!

Intel or believe in the Zen dream.

>> No.50286501

You're about 6 years too late to this discussion mate.
They're wood screws.

>> No.50286502

>It's phillips
>must be a woodscrew


>wood screw
>Home Depot
>not selling flush sitting screws


Why don't you actually go to home depot once in your life and realize how dumb you are

>> No.50286514


>> No.50286515

You don't. The 8320 is perfectly fine and unless you break yours it has plenty of life left.

By the time you actually need an upgrade you can reexamine the market and decide.

>> No.50286518

Why don't you read your own posts once in your life and realize how dumb you are.

>> No.50286520
File: 27 KB, 399x397, cpuz4.4.png

I wonder if Zen will actually bring single-core performance on par with Intel. I doubt it, unfortunately.

I was going to buy an i5 2500K and an Intel board and use the rest of my parts, and probably be happy with my rig for 4 years, but decided to just buy an Athlon 750K to replace my A6 dual core. I'm actually OK with its current performance, especially at 4.4 GHz.

Will save up for a nicer rig for late 2016

>> No.50286531

Thanks for proving AMD Shills are basement dwellers that don't even come outside.

>> No.50286548

>single core performance

/v/ truly lives in the past.

>> No.50286559

>implying single core performance is irrelevant in 2015

>> No.50286578

Nice strawman response.

I didn't say more cores aren't important and won't become even more so in the future.

It's a fact that AMD's single-core perf atm is slower than Intel's, and I don't see how 2015 is the past.

>> No.50286595

>single thread

>> No.50286596

Quite frankly, I personally, and quite strongly believe in the Zen dream.

They have the Talent (Keller) and they have a substantial shrink in production process.

AMD themselves have said they set a target of 40% IPC improvement over Excavator, which itself is only 25% behind Haswell.
Going from 32nm all the way down to 16nm, you can naturally expect some TDP improvement.
Then factor in that they have one of the biggest names in the CPU industry heading the design teams, and an engineer CEO (Dr Lisa Su), and you can be sure those teams are going to get the funding they need to push out a quality product.

Hell, last time Keller was at AMD, we got the Athlon 64.

When he was at Apple, he was constantly wrecking Intel's shit, notably so with the A8.

>> No.50286613

>strongly believe in the Zen dream
I really wish I could, I REALLY do, but I have been waiting for 2 generations now and both have flopped hard. I don't know if I can keep waiting.

>> No.50286624

Hey, I'm all for AMD wearing the crown for a while. Can't wait until it comes out.

>> No.50286627
File: 956 KB, 852x462, aTAxcBT.webm

Depends on how much of an upgrade you are looking for. First get a water cooler and try to push your 8320 to over 4 GHz. If that is not enough, or your chip can't push 4 GHz even with 1.4V, then you will have to switch over to Intel, as AMD currently does not offer anything significantly better than an OC'd 8320 at over 4 GHz.

If that is so, then the i7-5930K (~$600) is a good start. It will give you ~40% more performance than a stock 8320, and even more once you OC it.

If the upgrade price is too steep for you, then the only other option is to wait for AMD to deliver the Zen CPUs.

>> No.50286634

You seem to be able to gain an awful lot of information from someone just from a post online.

Or are you just projecting


Or is this just diverting from the topic you so helplessly failed at with your attempts at damage control, for free, for a company that owes you nothing?
Unless it wasn't for free. In which case, in your attempts to build a better PR image for nVidia, I suggest you try focusing on the technologies they have acquired- I mean, developed, and how they have found such strong places in various professional work environments.

>> No.50286656

Go play Garry's Mod and say that. The game uses one core, so advanced gamemodes suffer due to the clogged rendering pipeline.

>> No.50286671

>taking things out-of-context
>posting without reading the corresponding answer posts.

You should read:

Hopefully. I finally want to see AMD back in the game, providing actual competition to Intel, so that we can stop getting cucked.

Why upgrade to a $600 CPU? I mean, I don't think Zen is going to surpass Intel but rather catch up with the current generation, so you might as well upgrade to a 6700K or something like that.

>> No.50286682

For the past 2 generations there have been notable fixes and problems addressed on AMD's part.
Compare a new Excavator part to a Zambezi part, and tell me nothing has changed.

But yes, they have put far too much effort into this arch, and too much money. They could have floated on Zambezi for at least a year more than they did, and they could have pushed out an impressive refresh.

I lost hope for AMD's CPU department years ago, however, I still have a lot of faith in Keller's abilities, and that's what the industry is relying on in order to move past this decade of stagnation.

>> No.50286689

>go play old shit and say that
Go back to Tibia.

>> No.50286710

A10-6700K here.

No problems, except for when some asshat sets off a bunch of HAMRs inside a several-hundred-part build.
In which case, all machines will suffer.

While Garry's Mod can be called "Intensive" due to the added load of so much physics being calculated, it is still Source Engine, which will literally run on a toaster.

>> No.50286719

Intel's i740 GPU
Intel's i432 CPU
Intel's i860 CPU
Intel's Larrabee, though that actually became a product that still exists

>> No.50286742

None of those Nvidia ones are actually significant, or even "mistakes", if you believe what is generally said.

CPU fuck-ups are far more significant than the niche dedicated GPU market.

The Snapdragon 810, AMD Bulldozer, Intel's insignificant gains since the sandy vag. These are massive fuck-ups.

>> No.50286785

Hitler believed in the bulldozer dream, now look at him.

>> No.50286800

>Create a defective-by-design GPU
>It's not a mistake

>nVidia showing up to a press-conference with a poorly put-together "prototype" and making a laughing stock of themselves, saying very different things to everyone who asked them about it
>Not a mistake

>Neither of these significant
I'm sorry. No. They are both big mistakes.

>> No.50286808

Fermi was a massive mistake, almost as bad as the FX 5800.
Fermi was all kinds of bad; it's just retarded, from woodscrews to housefires to 1.7%.
Even memes aside, it didn't have the performance it promised to have.
But still, it wasn't as bad as the FX 5800, or the FX series in general.

>> No.50286811
File: 49 KB, 500x598, 1440113933411.jpg

>Why upgrade to a $600 CPU? I mean, I don't think Zen is going to surpass Intel but rather catch up with the current generation, so you might as well upgrade to a 6700K or something like that.
Because single-core performance is starting to matter less and less. With DX12 games coming up, it won't matter if you have a shitty $160 8350, because they will be able to use all 8 cores efficiently. Also, applications like Hi10P H.264 encoding rely on multi-core performance, not single-core performance.

So because of that, upgrading is only worth it if you can get better multi-core performance. The i7-5930K really is a good CPU; OC'd it can give you 50-70% better multi-core performance than a stock 8320.

>> No.50286845

>Rebuilt German economy
>Zerg-rushed most of Europe and won
>Took a huge piece of Africa
>Took on half of the world and was winning

While he did throw it all away, you have to give it to him. He was a fucking genius to be able to fix the mess that was Germany after WW1.

>> No.50286876

If it's by design, it's not a mistake; that's the literal definition of a mistake: something you didn't mean to do.

None of those resulted in Nvidia losing sales either. You can't even compare: the 970 sold like hotcakes and still sells, while no one wants to touch the 810, Bulldozer, or Ivy/Skylake chips.

They still sell. Only /g/ autists like us even know about this shit, well, us and the tech journals.

>> No.50286877

Nvidia's mobile GPU failure plague

>> No.50286894

Tegra CPUs

>> No.50286931

this tbh

>> No.50287089



heh never seen nvidia cpu before

are they shit?

>> No.50287139


>> No.50287183

The Nvidia FX 5 Series

>> No.50287197

>were you not around for the haslel debacle in 2013?

There's no way I was putting anything with a 4xxx in my desktop.

Skylake is serving me well, considering I never bought a haslel piece of shit.

>> No.50287232


Skylake is just as bad as haslel; the IHS is still glued and not soldered.

>> No.50287264
File: 123 KB, 1024x768, 1441082561319.jpg

I heard skylake chips have worse OC'ing compared to Haswell chips. Have you tried overclocking yours yet? If so what Voltage did you have to use?

>> No.50287281

Qualcomm going 64-bit early.

>> No.50287305
File: 71 KB, 600x400, 1437779938215.jpg

>glued IHS
That shit is still present on skylake chips? LMAO.

>> No.50287355

>moving to glue to save $.65 per chip
top israel.

pretty sure AMD is going to do the same thing for their non-black edition AM4 processors, though :(

>> No.50287361

>my nehalem build still maxes every game I play at 1440p with a 290

I don't understand why 99% of people would ever upgrade beyond Nehalem/Westmere, or Sandy/Ivy Bridge for the PCIe bandwidth, not even for the CPU speeds.

Bulldozer is pointless as well, as a Phenom II is fast enough and has lasted just long enough for people to upgrade to Zen.

If you bought haswell or newer for vidya you're fucking gullible.

>> No.50287365

jewtel is just one fuck up after another at this point. At least we can expect improvement with Zen chips.

I have a haswell i5 and I won't be upgrading even when they release kaby lake because I am actually expecting it to be shit.

>> No.50287381

>tfw broadwell i5-5675c cucks the fuck out of i7-6700k in minimum/avg FPS with good video card


why did intel cut broadwell short again

>> No.50287383

I want jewtel because I never had one before.

I've only owned an AMD Phenom X2 and an AMD FX 6300.

>> No.50287392

well if you get an intel i5 or better since sandy bridge you're set for years, maybe decades if intel eats shit

>> No.50287413

Well, AMD ate shit, but as a result Intel is eating shit as well.

>> No.50287422


yup, it's only soldered on haswell-e and probably broadwell-e and skylake-e, whenever those actually launch

>> No.50287425

Haven't bothered; I'll start OCing when I dip below 60 FPS in a game.

I'm at 4.0 stock.

>> No.50287548

>5% higher IPC than Haswell
>literally less than Skylake

>> No.50287571

>Excavator 25% less than Haswell
>Zen expected to be 40% over Excavator
>Somehow this works out to Zen having 5% greater IPC than Haswell

Math is hard

>> No.50287591

I refuse to believe anyone could be as retarded as you.

So, here's a reply. Good b8

>> No.50287620

what's 0.75 * 1.4?

>> No.50287637

Nice damage control mate.

So tell me, how does it work out? Excavator is 25% less than Haswell; Zen is expected to be 40% over Excavator. So just how DOES this mean that Zen is 5% over Haswell, when with some basic mathematics we can deduce a 15% expected improvement over Haswell?

>> No.50287652

>with some basic mathematics, we can deduce a 15% expected improvement over Haswell
Only if you're literally clinically retarded
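The two readings of the thread's numbers differ because IPC scaling is multiplicative, not additive. A quick check using the figures quoted above (Excavator 25% behind Haswell, Zen targeting +40% over Excavator):

```python
# Excavator IPC relative to Haswell (Haswell = 1.0)
excavator = 0.75
# Zen target: 40% improvement over Excavator (compound)
zen = excavator * 1.40

print(round(zen, 2))   # 1.05 -> ~5% over Haswell

# The naive additive reading (-25% + 40% = +15%) double-counts,
# because the +40% applies to the smaller Excavator base:
naive = 1.00 - 0.25 + 0.40
print(round(naive, 2))  # 1.15
```

So 0.75 × 1.40 = 1.05, i.e. the "5% over Haswell" figure is the arithmetically correct one.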

>> No.50287753
File: 2.27 MB, 3264x2448, IMG_20150914_224241.jpg


>still on lynnfield
>not having to pay a premium to overclock

>> No.50287775

>implying the kids in here had money or were old enough to build a computer when Nehalem was still being sold.

>> No.50287791
File: 36 KB, 417x414, sandyvag.png

Between this and an overclocked 390, I'm pretty good until Zen and 14nm GPUs come out.

>> No.50287892

Is Opteron any gud?

>> No.50287913

Depends, are you making a web server that serves millions of users?

Because that's all it's good for. RIP gaming opterons 2001-2009

>> No.50287926

A good, low-budget alternative to Xeons.

Greater core count for less cost. Good if your workload makes use of thread count over per-core strength.

>> No.50288010


Skylake isn't actually worse than Broadwell, it's just only negligibly better.
The bigger fuckup is that Kaby Lake will be another Tock and the 10nm shrink won't be for another 2 years.

>> No.50288017


No. Even a 5820K would be better than a 16-core Opteron; the Opterons are clocked too low to perform well. You ideally need to be at 4 GHz on Bulldozer, regardless of core count, to get good per-core performance.

>> No.50288081


Bulldozer is a fine server/workstation CPU. It rocks for VM-related stuff.

The problem is that mainstream applications only use four threads at most, so most of Bulldozer's advantages go unnoticed. AMD doesn't have Intel's fab tech, so it will always lose on the power-efficiency and yield front.

Basically, Bulldozer was meant to go up against Bloomfield/Gulftown, but delays forced it to go up against Sandy Bridge.

People keep forgetting that Intel's primary advantage has always been fab tech. It allowed them to weather the shitstorm known as Netburst.

>> No.50288119


The biggest mistake on the x86 front is probably Intel's exclusive deal with RDRAM and forcing it onto the Pentium 3, which was unable to take advantage of it. It only increased latency and drove up platform costs.

>> No.50288164


Fermi was a fine architecture. The problem was that GF100 was meant to be a GPGPU chip at heart. Nvidia had to overclock the bloody thing to make it good at gaming. Not many chips could handle the clockspeeds needed to make it competitive against the HD 5870.

Nvidia tweaked the GF1xx core to make it more suitable for gaming, which is why GF104 and GF110 were far superior. The GTX 580 held the crown until the 680 and 7970 came out.

>> No.50288255

I remember being terribly disappointed with my FX 5200, to the point of going back to my Ti 4200 AGP 8x.

>> No.50288294

I miss my 3570k. It was a workhorse.

>> No.50288321

GF4Ti series was actually decent, whereas GFFX just got completely raped by the ATi R300.

> 9700 Pro mustard race reporting

>> No.50288324

Bulldozer was/is cheap as fuck.
My FX-4100 was enough for the last 4 years, until my i5 upgrade.

>> No.50288325

This. Memedozer's power draw didn't even do that badly against Sandy Bridge. It was Ivy where the differences started to get significant; along with the inferior single-thread performance, which Piledriver couldn't fix, it became less and less attractive for the consumer space, specifically gaymin.

Bulldozer could have been something, but it came too late, the manufacturing wasn't good enough, and the software support has been poor. Piledriver fixed some issues, but by that point Intel had already gone too far ahead.

Hopefully Zen is gonna fix it.
Keller is our last hope.

Fermi was "fine" in ways Bulldozer was: Good in theory, bad in execution. Unlike AMD however, Nvidia had the R&D to fix things in time.

>> No.50288339

9700 Pro/9800 Pro were the magical 5 year cards.
The 7950/7970 is also shaping up to be one of the magical 5 year cards as well :^)

>> No.50288369

>Fermi was "fine" in ways Bulldozer was: Good in theory, bad in execution. Unlike AMD however, Nvidia had the R&D to fix things in time.

That's bullshit. Both Fermi and Bulldozer were good solutions to the wrong problems, so "good in theory" is overgenerous.

> selling hot enterprise GPGPU to gamers
> shared frond-end CPU optimized for identical server processes being sold to anybody except server farm operators

>> No.50288397

Pascal and/or Fury2/R9 400 have a shot at this too.
If anybody thought the holdup on 28nm was long, they'll be looking back fondly in 6 years when we're still stuck on 14/16 nm.

>> No.50288407

>good solutions to the wrong problems

I guess that's a better way to describe it. :^)

>> No.50288408

That is what I expect as well; it's gonna be a long couple of years.

>> No.50288447

Nvidia stopped with the 5-year-card shit after it cut their margins in the GTX 200 series; don't expect anything like that from them without them "forgetting" to put in features like preemption and async.

It's part of why Nvidia has wacky marketshare in discrete gaming GPUs.

>> No.50288507

nah, I doubt their fuckup on async/preemption was planned.
Their preferred tactic in the last few years (driver sabotage aside) has been to introduce new fixed-function hardware (e.g., big tessellators), then use shady tactics to get devs to overly rely on it (Crysis 2 water, HairWorks, etc).

>> No.50288545

He got played by the Russians the entire time
He expanded their border of control much too far, much too fast
He stretched their manpower and resources too thin to maintain stability
Bad management of the military sent a large amount of forces to die in Russian territory due to weather conditions which made movement and supply nearly impossible.

I'll give you "rebuilt the economy", as getting a nation of broken down people to believe in the value of a new currency must have been a tremendous feat.

>Took on half the world and was winning
It was slowly crumbling from 1943 onwards to the end. Refer to the above reasons.
Russia was the wildcard as well, who ultimately betrayed everyone for their (failed) gain. Stalin was biding his time to try and take a large majority of Europe after germany ran outta gas and the other countries were weakened.

>> No.50288644

The krauts probably would have had even odds of crushing Russia if they hadn't had to delay their invasion to bail Italy out of their fuckup in Greece in early '41.

>> No.50288659

Hitler's main military downfall was also his greatest tactic: Blitzkrieg.

Blitzing a country like Poland or France and catching them completely off guard forces a quick surrender, and government cooperation makes occupation of strong points much easier than blowing them to hell and making a mess of everything.

However, when you try to push 500 miles into Russia without properly establishing supply lines, and expect such patriotic people to crumble as easily as the French did... well, that's naive to say the least.

And then of course we have Hitler's approach to armoured warfare: bigger being better. Rather than striking a balance between volume and quality, he just wanted bigger.

Had the Germans focused their efforts on constructing greater numbers of Pz.IV tanks as opposed to creating monstrosities like the Maus, then maybe they could have slowed or even halted the Russian advance.

Hindsight is, of course, 20/20. It's more than easy for us to say this now, but put yourself in his cocaine-drop-driven shoes and it's easy to see why he believed in tanks like the Tiger II, the Maus, and arguably the Panther.

>> No.50288788

No amount of time can make up for inadequately equipped troops.

Even at the best of times, Russia is a shithole to slog through unless you're on the main roads. Which you tend to avoid in war, because, you know, obvious approach routes, ambushes, etc, etc.

If Hitler wanted to stand a chance in Russia, then he should have moved in during the winter, when the mud doesn't matter; tanks traverse snow in Russia as well as they do snow in Germany, but mud is a bit less consistent.
He should have properly equipped the troops with true winterized gear and moved slowly, setting up secure supply lines, moving from fortified position to fortified position, and just enduring the constant head-on charges of the Russians. Fight the war of attrition, as it were.

>> No.50289036

Intel Atom having proprietary drivers only.

>> No.50289070

They are self tapping screws made for wood.

>> No.50289171

They function fine and are some of the best bang-for-your-buck CPUs around.

>> No.50289335

This. They do compete at some levels. FX isn't a killer CPU like before, but it's not the "biggest mistake in PC history."

It's AMD, tho. They have literally no brand presence or advertising. They're performance/value oriented. They can't compete in a post-iPad technology industry that has gone through complete consumerization.
>> No.50289370

I think they're still the best CPUs overall for virtualization

>> No.50289374

Yes, Netburst.


>> No.50289435


Yeah, because you can get vCPUs for pretty much half the cost compared to an Intel build, and FX chips come with AMD-Vi while Intel's VT-d is only in $300+ i7s.

>> No.50289800

Well, those were new technologies, while x86 is as old as the universe.
