
/biz/ - Business & Finance



File: 6 KB, 285x177, ahaha.jpg
No.24051686

https://www.trustnodes.com/2020/11/19/vitalik-buterin-ditches-sharding

AHAHA AHAHAHA AHAHAHAHAHA AHAHA

>> No.24051712

>>24051686
haha I fucking knew it. It was never going to work. I warned him, but he didn't listen.

>> No.24051724

>>24051686
>>24051712
>Instead, eth1 will be merged into the beacon chain itself, something that means effectively base sharding is now out of the picture, and with it base scalability too. Buterin says:
>“Eth1 transactions will live directly on the beacon chain instead of being in a shard and… “phase 2 (native sharded execution) is de-emphasized for the time being.”
This means only the beacon chain will have execution, not that sharding is gone. It has been known for a long time now that data availability is the goal of ETH2 and that applications will be built on a higher level ("layer 1.5", a chain only for data availability).

>> No.24051740

>>24051724
yeah this author is just a brainlet who only understands a few buzzwords

>> No.24051743

what the fuck the article author has no idea what's going on. is this a legit site or a blog hosting site?

>> No.24051759

imagine putting your faith into ethereum2 as opposed to haircomb token on the bitcoin chain

>> No.24051776

>>24051724
this basically means that there will be multiple execution engines, this part is abstracted away. it's a good idea, people can use whatever execution engine they like. if a new engine from some meme project comes along that is good, porting it requires zero intervention in ethereum itself.

>> No.24051782

>>24051724
>it's sharded
>but without sharded execution
cope more, faggot.

>> No.24051787
File: 986 KB, 1038x1530, D7.png

Is it difficult to find good defi?
Ran into DYMMAX (dymmax.com); they run a referral program with Probit, wanna join. Good staking model; the private sale finished with $1M in funds.

>> No.24051794

>>24051782
see >>24051776 nigger. It means multiple execution engines.

>> No.24051802

>>24051782
Cringe

>> No.24051806

I thought Trustnodes was legit, bros. wtf are they thinking
This is embarrassing. That Vitalik comment wasn't even as cryptic as his usual ones.

>> No.24051810

avax will be better than eth, im early hodler and proud!

>> No.24051825

Sharding more like SHARTING

lmao

>> No.24051826

>>24051794
>>24051802
Okay so how will eth 2.0 deal with scaling verifiable smart contract execution? There's only so much you can pull off-chain.

>> No.24051840

>>24051826
Read, faggot. The data is on chain, thus verifiable. It's not layer 2 as we know it today.

>> No.24051861

>>24051794
>>24051802
>>24051840
Nigger, if the data is on-chain, and the chain is not sharded, then you can't scale into exponential throughput. The only way to do that is either to add trusted third parties who do stuff off-chain, or to implement sharding. Ethereum is a clusterfuck.

>> No.24051876
File: 45 KB, 800x450, 1601967762538.jpg

>>24051861
How much of a brainlet are you? Data is on the shards.

>> No.24051878

>>24051861
I was talking to vitalik and he insisted on doing all that shard yanking and shit to retain atomicity of execution. I told him it wouldn't work, and now he's had to admit defeat.

>> No.24051891

>>24051686
Yeah. Layer 1 scalability isn't coming like most people thought. Sidechains and roll ups can do the job though.
>>24047347

>> No.24051892
File: 61 KB, 747x686, 1564987868606.png

So what, back to square one for scalability?

>> No.24051896

>>24051876
Okay, what if I execute a smart contract function that requires data from every single shard? What will you do then? Yank thousands of shards into the beacon chain? kys.

>> No.24051902

>>24051896
merkle proofs
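The one-word "merkle proofs" answer can be unpacked with a toy sketch: a root commits to a shard's data, and a logarithmic-size witness of sibling hashes proves that one leaf belongs to it, which is roughly the mechanism the data-availability replies in this thread lean on. This is a generic Python illustration, not ETH2's actual tree format; the function names are invented for the example.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the root of a Merkle tree over hashed leaves."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:               # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect sibling hashes from one leaf up to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        # record the sibling and whether our node is the right child
        proof.append((level[index ^ 1], index % 2 == 1))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    """Recompute the root from a leaf and its sibling path."""
    node = h(leaf)
    for sibling, node_is_right in proof:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

shard_data = [b"acct0:100", b"acct1:250", b"acct2:7", b"acct3:42"]
root = merkle_root(shard_data)
proof = merkle_proof(shard_data, 2)
assert verify(root, b"acct2:7", proof)           # valid witness
assert not verify(root, b"acct2:9999", proof)    # forged data fails
```

The witness is O(log n) hashes, which is why a chain can commit to a large shard's data with one 32-byte root and still let anyone prove individual entries.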

>> No.24051929

>>24051902
do you know how long it would take to do this? If you have data dependency between all accesses, then you have to yank all state as you try to execute the transaction. And the joke is on you if the transaction is rolled back in the end. 1000x the latency of pulling state, modifying it, and then rolling it back, all for free. I will write a bot that will permanently DOS the beacon chain, mark my words.

>> No.24051945

>>24051896
>>24051929
Point being all the data is there, it is verifiable by merkle proofs. The rest is up to the execution environment. Some will be cross shard, some will be one shard. Some will be a world of their own, some will interact. You should read up on this a bit more, it's all there for you to dig into.

>> No.24051955

>>24051929
>I will write a bot that will permanently DOS the beacon chain, mark my words.
kek I'm looking forward to it, midwit

>> No.24051977

>>24051945
I am very deep into the topic. If you have a transaction that has to access every single shard multiple times, then you have extremely high communication overhead per GAS, and if the transaction is rolled back in the end, you didn't pay anything but completely blocked the beacon miner for extreme amounts of time. And you can send hundreds of those transactions every block; they will all be unrolled so there's no cost for you. But the miners will be unable to process other transactions in the meantime.

>> No.24052000

Imagine still thinking ETH is going to be the smart contracts platform of the future.
Layer 2 = meme overengineered Reddit garbage.
FTM and AVAX looking good.

>> No.24052007
File: 7 KB, 250x228, 3049423C-D970-4478-9CFB-314E998EB85F.jpg

>>24051686
Are they fucking using ChainLink for eth 2.0 or not
Someone fucking tell me

>> No.24052022

This pretty much means full PoS with the ability to switch a validator node off and on, plus withdrawals if needed... all already possible with phase 1, which might even come within a year. Crazy how fast things are kicking into high gear. That same phase 2 will eventually come as well, but it won't even be that big of an upgrade anymore compared to phase 1

>> No.24052025

what the wtf no way. Real and true, just sold all my ETH omg

>> No.24052028

>>24051955
Master-slave style sharded execution requires locking the data shards during the calculation of the transaction so that you can update the state properly without any races. This means that no other smart contract can be executed during that time. And if 99% of the execution time is spent communicating and retrieving data, you're DOSed.

>> No.24052034

wallet: 0x44ab3576532a06118429bbe3f2481f4e17920f13ee8f53862df1a96e21f574cd
privKey: 0xa4c535354b181cD25E289F56c8f0C4a61B7EE84b7

>> No.24052054

>>24051977
>I am very deep into the topic
No you aren't, you make so many wrong assumptions in every post that I can't keep up. Read more. Yanking isn't happening, and cross-shard stuff doesn't work like you think it does.

>> No.24052098

>>24052054
I am very deep into the theory of sharding, not into what vitalik the fag is doing for ethereum. It's been a while since I talked to him, that was during the planning of eth 2.0.

>> No.24052109

>>24051686
it's a very misleading article.
What's happening is that instead of waiting for phase2 the plan is to merge eth1 much faster without computation sharding.
It doesn't mean it's not going to arrive in the future.
It's good news; the author is a brainlet.

Thanks to the change, ethereum can realistically switch to PoS in 2022.

>> No.24052135
File: 102 KB, 320x303, 1519029249913.png

>>24051686
I have no idea what this means someone tell me if this is good or bad?? I own 200 ETH btw.

>> No.24052136

>>24052098
>It's been a while since I talked to him
I can tell. I'm not going to sit here and lay it all out. I'm sure there are design documents with explanations on github. Come back with an updated thread of critiques and we can debate, even though I'm by no means an expert.

>> No.24052149
File: 32 KB, 471x400, 4faa42299ab76bbc86ecb95c74053342.jpg

>>24051712
>I warned him, but he didn't listen.
wh-who is this?

>> No.24052229

>>24051861
>Nigger, if the data is on-chain, and the chain is not sharded, then you can't scale into exponential throughput.
>what are zkrollups
you're the best example of a midwit I've met in a long time. Just smart enough to be able to write believable nonsense.
One of the reasons phase2 was deemphasized was because zkrollups became practical and they're infinitely superior.
Ideally, zk-snarks (or starks) would be native on ethereum, but they require powerful gpus to compute which would make node requirements very high. It's possible it happens eventually - a system where block producing is optional, but verification isn't, allowing for few heavy nodes.

>> No.24052269

>>24052229
>require powerful gpus to compute which would make node requirements very high
They are cheaply verified though, which is the point, so this isn't 100% correct. The idea is that the execution engine is arbitrary, meaning zk, traditional, hybrid, or something entirely new are all possible.

>> No.24052281

>>24052136
I bet whatever he came up with in the meantime is not substantially better than the garbage he was initially planning.
>>24052229
>zk rollup is superior to true sharding
keep telling that to yourself.

>> No.24052286

>>24052229
>>24052269
and also that there is no verification AT ALL (unless you have an interest in a specific execution environment) besides the data, meaning MUCH higher throughput.

>> No.24052298

I’ve been an ethtard for a long time, I love it and all, but how did they fuck this up? Polkadot has fucking cross-shards down, why not EF?

>> No.24052308
File: 95 KB, 1000x1000, poster,840x830,f8f8f8-pad,1000x1000,f8f8f8.u5.jpg

>>24051686
JUST WENT ALL IN ON CARDANO

>> No.24052310

>>24051826
>>24051861
>>24051878
Looks like Avalanche is back on the menu boys

>>24052054
weak, address his arguments instead of resorting to ad hominems. What wrong assumptions is he making, come on, list them faggot.

>> No.24052331
File: 155 KB, 455x800, vitalik_buterin.jpg

>>24052229
Exponential throughput using zk implies exponential zk generation power, which would require ludicrous amounts of GPUs.

>> No.24052384

FTM is the better tech and the only real choices. AVAX is second choice. Others are not on my radar.

>> No.24052386
File: 1.26 MB, 354x200, fiddy.gif

>>24051686

the absolute state of ETH fudders

>> No.24052387

>>24051776
this

>> No.24052409

>>24052331
On-chain smart contracts will always remain the bottleneck of Ethereum, even if you can perform some state transitions using zk. Doing any complex logic in zero knowledge is extremely expensive. Even proving that a block hash is valid would probably take hours on a normal computer.

>> No.24052411

>>24052310
>merkle proofs
>arbitrary execution engine
>data availability
I have laid out the big picture. He includes shit from a preliminary design that was superseded long ago.

>> No.24052440

>>24052135
not much of a difference; in fact, it might be an improvement to chain security.

>> No.24052441

>>24052409
I agree with you in general fren but what do you suggest WOULD work instead?

>> No.24052481

>>24052269
>They are cheaply verified though
in the original sharding design every validator has an equal chance to make a block. If you did that with zk-proofs, node requirements would go through the roof.
That's why the short-term road is to have few heavy nodes as layer 1.5-2 that generate proofs, but everyone verifies those proofs on-chain.
Currently, those systems are going to be separate, but once full evm execution becomes practical in a zk circuit it should be introduced natively in the same way.
Non-zk phase2 doesn't really make much sense anymore.
>>24052281
>keep telling that to yourself.
It's an objective fact. A zkrollup only puts the initial state and the result on-chain. Intermediate computations and state changes are completely gone. Which means that even extremely complicated transactions that net to a simple token transfer at the end on-chain have the exact same cost as simple token transfers. That's a truly enormous advantage. Nothing else comes close.
>>24052331
>Exponential throughput using zk implies exponential zk generation power, which would require ludicrous amounts of GPUs.
Which isn't a problem because the system works well even if there's only one block producer. It can censor transactions inside the zkrollup, but then users can force a withdrawal on-chain, so it's fine.
Of course, it can be mitigated by having like 20 block producers to make it much less likely, or even a system where anyone can become a block producer after locking some collateral. Those block producers need to have powerful hardware, yes.

Direct on-chain computing requires replaying computation among every node in a shard. The problem is that to still be decentralized, each node can't be too heavy. It's fine to require a $3000+ machine for zkrollups block producing, but not fine for a general validator. The current target is to make it possible to validate on a newest raspberry pi. Those pis are still capable of cheaply verifying zk-proofs - that's the magic.
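The cost argument above ("a zkrollup only puts the initial state and the result on-chain") can be illustrated with a toy netting example. A real rollup posts a state root plus a validity proof rather than a plain balance delta, and the names here are invented for the sketch; it only shows why intermediate computation is free from layer 1's point of view.

```python
# Off-chain: the rollup operator executes every transfer in sequence.
transfers = [
    ("alice", "bob", 10),
    ("bob", "carol", 10),
    ("carol", "alice", 4),
    ("carol", "bob", 6),
]

balances = {"alice": 100, "bob": 50, "carol": 25}
initial = dict(balances)

for sender, receiver, amount in transfers:
    assert balances[sender] >= amount, "insufficient funds off-chain"
    balances[sender] -= amount
    balances[receiver] += amount

# On-chain: only the net result (plus a validity proof) ever lands,
# regardless of how many intermediate steps happened off-chain.
net_delta = {acct: balances[acct] - initial[acct] for acct in balances}
print(net_delta)   # {'alice': -6, 'bob': 6, 'carol': 0}
```

Four transfers (or four thousand) collapse to one final state update; the on-chain cost depends only on the result, not on the amount of computation behind it.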

>> No.24052515

>>24052481
>Which isn't a problem because the system works well even if there's only one block producer.
yeah let's just throw decentralization out of the window altogether, no one cares about that anyway, right?

>> No.24052527

>>24052481
>in the original sharding design every validator has an equal chance to make a block
Sorry I misinterpreted what you were saying, you're right. I was thinking "block producers simply dumping proofs on chain" not "block producer computes the proof"

>> No.24052535

>>24052409
The generation of zero knowledge proofs will make operating a performant L2 node extremely expensive and probably drive off-chain tx fees through the roof if you want any good latency.
>>24052441
State + transaction sharding. Make a ticket system to communicate between shards, split multi-shard execution into multiple phases, and introduce parallel-programming techniques such as mutexes. Any code that is executed costs gas, even if the transaction reverts, because if the first part of a transaction on one shard is valid, that shard has to mine it and create a ticket which continues the execution on shard 2. If a transaction is rejected, you lose your money; otherwise, you'd have to implement asynchronous rollback of actions going back multiple blocks.
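The ticket scheme above is the poster's own proposed design, not a deployed protocol. A minimal sketch of its two-phase flow, with every name (`Ticket`, `phase1_debit`, `phase2_credit`) invented for the illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    """Receipt minted on one shard to continue execution on another."""
    target_shard: int
    payload: dict
    gas_prepaid: int

@dataclass
class Shard:
    sid: int
    state: dict = field(default_factory=dict)
    inbox: list = field(default_factory=list)

def phase1_debit(shard_a, shard_b, sender, receiver, amount, gas):
    """Phase 1: debit on the sender's shard, mint a ticket for the target shard."""
    if shard_a.state.get(sender, 0) < amount:
        return None, gas          # transaction rejected, but gas is still spent
    shard_a.state[sender] -= amount
    t = Ticket(shard_b.sid, {"to": receiver, "amount": amount}, gas)
    shard_b.inbox.append(t)
    return t, gas

def phase2_credit(shard_b):
    """Phase 2, next block: consume queued tickets on the target shard."""
    for t in shard_b.inbox:
        to = t.payload["to"]
        shard_b.state[to] = shard_b.state.get(to, 0) + t.payload["amount"]
    shard_b.inbox.clear()

a, b = Shard(0, {"alice": 50}), Shard(1, {"bob": 5})
_, gas_spent = phase1_debit(a, b, "alice", "bob", 20, gas=21000)
phase2_credit(b)
assert a.state["alice"] == 30 and b.state["bob"] == 25
assert gas_spent == 21000     # charged even if phase 1 had reverted
```

The key design point the post makes is visible here: once phase 1 commits, there is no rollback, so gas must be charged unconditionally or the second phase would need multi-block asynchronous undo.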

>> No.24052573

>>24052409
>Doing any complex logic in zero knowledge is extremely expensive. Even proving that a block hash is valid would probably take hours on a normal computer.
That was the state of the art in 2016, not today. Performance has increased by orders of magnitude thanks to recursive proofs and other improvements.
https://medium.com/aztec-protocol/aztec-zkrollup-layer-2-privacy-1978e90ee3b6
https://medium.com/matter-labs/curve-zksync-l2-ethereums-first-user-defined-zk-rollup-smart-contract-5a72c496b350 - you can check it on rinkeby testnet, https://zksync.curve.fi
https://medium.com/starkware/hello-cairo-3cb43b13b209
>>24052515
The point of decentralization is to ensure no one has unilateral power to mutate the state or to censor. That's secured by cryptography in a zk-rollup, which means the only possible drawback of having just one block producer is availability, but that's fixed by allowing withdrawals on-chain. Which means even one block producer can't steal anything.

>> No.24052577

>>24052535
I'm out because I have to do some work, but again, I'm really looking forward to you DoS bot kek

>> No.24052597
File: 280 KB, 646x595, 1517967474541.png [View same] [iqdb] [saucenao] [google]
24052597

>>24052440
ok thanks I'll keep holding.

>> No.24052602
File: 20 KB, 378x378, 1580630128807.jpg

Anyone with any competency already knew that the later stages of ETH2.0 were most likely going to be scrapped. Layer 2 solutions are already about to go live for generalized application use via Optimism by end of year, and ZK Rollups will also support generalized application scaling by end of next year. Optimism is the stepping stone and lifesaver that will carry us to generalized ZK Rollup technology, which will see us at 10,000 TPS.

Sharding and base layer 1 scaling were never really important; they can be done, but are unnecessarily complex and change the rules a bit too much on dapp composability.

Trust Money Skelly. This was always part of the plan. We no longer need the later phases of ETH2.0 because scaling is happening right fucking now on ETH1.0; all that's truly needed is Proof of Stake to get away from toxic Proof of Work mining.

>> No.24052608

>>24052515
was going to say the same thing. Ethereum is trading all ideals that were behind the cryptocurrency movement for some clusterfuck frankenstein abomination.
In the end, Ethereum will all come down to plasma chains, where the operator will execute smart contracts, but proving generic EVM execution in ZK is too expensive to be feasible, and thus, I am quite sure that most off-chain transactions will be unprovable and based on trusting the plasma operators. That, or "trusted hardware" lmao.

>> No.24052618

>>24051724
cope

>> No.24052632

>>24052577
Yeah I'll try to do it if I find the time once they release that stuff. It would probably only cost me the creation fees of 1 smart contract per shard, and from then on, nothing.

>> No.24052642

>>24052007
LINK being on its own chain is a meme, it's never happening.
Link, just like XRP, is a scam marketing ploy. They cash out and pay corporates to partner up with them, which pumps the price more.

>> No.24052645

>>24052573
the point of decentralization is much, much more than that. One block producer? The system you're describing is disgraceful compared to what SN suggested, laughable defense of zk-rollups.

>> No.24052667

>>24052608
Do you have any thoughts on Avalanche?

>> No.24052684

Brainlet here, what does this mean? Should I hold or sell

>> No.24052686

>>24052632
Shorting before you do this will make you rich beyond your dreams, so make sure to find the time.

>> No.24052693

>>24052135
it means eth continues to be a badly designed, hastily constructed clusterfuck, yet it will continue to live on as the dominant "php for smart contracts" platform.

however, there is no reason why this shitcoin will "moon" - it will spend its life bouncing around $500-$600 forever.

>> No.24052748

>>24052054
You keep mentioning that you talked to vitalik as if you're expecting us all to be impressed. No one cares lmfao, literally boomer mentality.

>> No.24052760

>>24052748
wrong person I never said that

>> No.24052768

One value of this thread is that it confirms the fact that you need 130+ iq to truly understand what's coming. Others will only understand once they see it in action. Eth is literally in the smart money phase.
>>24052608
>In the end, Ethereum will all come down to plasma chains
Just stop I can't take your midwit takes any more.
Nobody is going to use plasma it's a dead concept.
>>24052645
>the point of decentralization is much, much more than that
uh huh. List some then.

>> No.24052769
File: 1.60 MB, 2554x1446, 4.png

Send some reviews about crypto incubator duckdao.io

Has anyone following it in it? What do you think about their Duck Card game with investment options?

>> No.24052771

>>24052573
>ZK will prevent censorship
No, it won't and it can't. ZK will prevent fraudulent computation, but you can just ditch any computations you want and not include them in the proof, and then you have censored someone. It's so easy. And if the plasma operator is the only party who is in full possession of all state data (which will most likely be the case), then no one else can even generate proofs or calculate new states, because the plasma operator will only provide zk proofs that the state was correctly updated and maybe the state root, but that's not enough for a third party to also produce states. The plasma operator can censor everyone.
>>24052686
Yeah I am bad at trading, so I'll probably have to resort to this. I'll make a post in advance here when I start the bot, so that you guys can also get out in time.
>>24052602
>toxic Proof of Work mining
You're brainwashed if you think that PoShit is in any way better than PoW.
>inb4 climate change
lmao

>> No.24052812
File: 168 KB, 929x1175, 1605760327714.jpg

Current Marketcap: $90,433,541

>> No.24052840

>>24052667
No, I never heard of it. I don't follow cryptocurrencies any more, since most of them are the same, technologically speaking. The only thing interesting to me is the algorithms involved to overcome technical limitations. And of course trying to break even again with my trading lol.
>>24052768
>>the point of decentralization is much, much more than that
>uh huh. List some then.
Central point of failure removal, censorship prevention, everyone can do his part to support the system, etc.

>> No.24052861

>>24052771
>but you can just ditch any computations you want and not include them in the proof, and then you have censored someone
That's when that someone does a withdrawal on-chain and that's it.
Also how do you even selectively censor if you don't know who people are? Aztec is going to be fully anonymous.
>And if the plasma operator is the only party who is in full possession of all state data
What plasma operator. No, zkrollups put all needed data on-chain. Every ethereum node is in possession of state data (or shard node with sharding).
>>24052840
>Central point of failure removal
Yes one node can go offline, so the best solution is to allow anyone to generate proofs after providing collateral to be slashed for being offline.

>> No.24052891

>>24052768
If there were one block producer the CIA could track him down and tell him to not process any data while they're performing a certain operation.
Or he could stop producing and short the network.
Or he could get hacked by someone shorting.

Are you really such a brainlet that you don't understand the basic tenets the blockchain movement was founded on?

>> No.24052933

>>24052840
>I don't follow cryptocurrencies any more
This explains a lot.

>> No.24052965

>>24052840
look into it. It's not another DPOS spinoff like all the other "ETH Killers". Even Vitalik admitted it was interesting. Made by IC3 co-director.

>> No.24052968

>>24052840
>No, I never heard of it.

Emin Gün Sirer:
>Cornell CS Prof who has been in crypto since BEFORE BTC
>is the Number 2 in Cornell's IC3 (Ari Juels is Number 1)
>h-index of 46 (Juels' h-index is 83); nobody in crypto comes close to these 2 gigabrains (Vitalik has an h-index of 27 FYI)
>EGS recognized the importance of Chainlink early on and is in close contact with the Flannel Man
>Vitalik endorses EGS and is already looking at "Athereum", a Subnet on Avalanche, to solve what he can't do with ETH 2.0 https://athereum.avax.network/

Avalanche protocol:
>The third consensus protocol, after Nakamoto's Proof of Work and Classical Protocols
>This is not repackaged shit with minimal tinkering here and there; it's a completely new family of consensus protocols
>4500+ REAL TPS (no bullshit account tricks, batching, or L2), sub-3-second finality
>already more decentralized than everything else in crypto; running a Node is really easy and hardware requirements are low, anyone can do it
>basically consensus is reached by probabilistically sampling thousands of independent nodes over multiple rounds
>Resists 51% attacks (needs 80% network control to take over)

Full EVM support:
>Low friction of adoption for projects developing on ETH
>All Ethereum smart contracts and infrastructure work on AVAX out of the box, which means all the slow DeFi running on ETH right now will switch to AVAX to speed it all up
>AVAX nodes will validate ATH (Athereum), which makes running a node very profitable

Subnetworks:
>This is Avalanche's bread and butter. Independent networks can be launched on Avalanche, with near-infinite customization. Athereum is a good example of this.
>Big picture: within one year, the best parts of the entire cryptocurrency ecosystem can be mirrored on Avalanche, giving it all the TPS, scalability, and speed, plus interoperability with the entire ecosystem.

>> No.24053041

>>24052968
>Emin Gün Sirer
stopped reading right there
fucking turkjeet scammers

>> No.24053081

>>24052812
Ok, thanks, I'll go 75% in, with 25% link or something. I'll probably add a few grand to that depending on the price over the next weeks.
>>24052861
>anyone can generate proofs
only if they have full access to all state data. Otherwise, they can't do shit, even if they have a billion GPUs.
>>24052891
There are so many brainlets in this thread that suck the logs out of Vitalik's ass and think they know jack shit.
>hurr durr I read that medium post about merkle mountain ranges, zk-snarks, zk-starks, and optimistic roll-ups, now I know decentralised system design and cryptography.
>>24053041
>doesn't look at credentials

>> No.24053089

>>24052968
Get your fucking shitcoin out of here.

>> No.24053096
File: 38 KB, 758x644, 1595468487682.jpg

>>24053081
>doesn't look at credentials
nope, I only look at race

>> No.24053182

>>24053096
Not saying that's wrong, but you also need to realize that also subhuman races can have geniuses, just as we aryans also have retards.

>> No.24053263

>>24053089
But it’s an eth thread??

>> No.24053308
File: 23 KB, 283x201, golden-chuckle.jpg

>>24053263

>> No.24053399

>>24052891
another thing that's retarded with a central operator is that there is only one mining software implementation used for the whole system, so a system bug (of which there are many in ethereum) will destroy the whole network. In a decentralised network, you can have many competing software implementations, and the same bug is unlikely to be present in all implementations.

>> No.24053592

>>24053081
>only if they have full access to all state data
state data is fully on-chain.

>> No.24053702

>>24053399
>another thing that's retarded with a central operator is that there is only one mining software implementation used for the whole system, so a system bug (of which there are many in ethereum) will destroy the whole network
A bug that generates incorrect data is impossible because correctness is guaranteed by cryptography.

>> No.24053795

>>24053592
Not for plasma chains, which will handle most of the load because on-chain transactions will still be costly. Also, you can't do Turing-complete zk proofs, so only very simple logic can even be proven using ZK.
>>24053702
What if there's a bug in the ZK program? What if the implementation does not match the ZK program? Then you'd have an error in the operator. Or what if there's a bug that lets your operator crash or freeze?

>> No.24053967

brainlet here. number go up or down?

>> No.24054009

>>24053967
up, and then further up after the release. And then I will crash it with no survivors.

>> No.24054105

>>24051825
GOTTEM

>> No.24054481

>>24053795
>Not for plasma chains
plasma chains are dead.
>Also, you can't do turing-complete zk proofs
wrong, you can with recursive proofs.
>What if there's a bug in the ZK program?
What if there's a bug in an ethereum node? At some point the base code has to work for everything to function.
Ethereum is the only crypto in existence that tries to have multiple different implementations, but it turns out the system naturally degrades to just one node due to people's individual choices.

>> No.24054586

Recursive proofs work once a system is powerful enough to verify another proof within itself.
That means you can have a separate circuit for each opcode, and then reduce everything in a logarithmic fashion to one short proof, e.g. 16 opcode proofs -> 4 proofs that groups of 4 are correct -> 1 proof that the batch of 4 is correct.
This allows extreme parallelization during proof generation.

It's not there yet, but it's close. Proofs are starting to get generated on FPGAs. The future most likely lies in ASICs built just for zk-snark proof generation.
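The 16 -> 4 -> 1 reduction described above can be sketched with plain hashes standing in for proofs. Real recursive SNARKs verify the child proofs inside a circuit, which this toy obviously does not do, but the tree shape and the logarithmic number of sequential rounds are the same:

```python
import hashlib

def aggregate(proofs, arity=4):
    """Fold a list of (stand-in) proofs groupwise until one remains.
    Every group within a round is independent, so a round's proofs
    can all be generated in parallel; only the rounds are sequential."""
    rounds = 0
    while len(proofs) > 1:
        proofs = [
            hashlib.sha256(b"".join(proofs[i:i + arity])).digest()
            for i in range(0, len(proofs), arity)
        ]
        rounds += 1
    return proofs[0], rounds

opcode_proofs = [hashlib.sha256(bytes([i])).digest() for i in range(16)]
root, rounds = aggregate(opcode_proofs)
assert rounds == 2    # 16 -> 4 -> 1, i.e. log_4(16) sequential rounds
```

With arity 2 the same 16 proofs take 4 rounds (16 -> 8 -> 4 -> 2 -> 1); the total proving work stays linear while the sequential depth is logarithmic, which is the parallelization the post is pointing at.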

>> No.24054616

>>24054481
That's because the Ethereum Foundation shits on the principles of the movement. They could actively fund the co-development of multiple node implementations; they have enough money to do that.

>> No.24054721

>>24054586
In theory, but you'd basically have to design a ZK implementation of the entire EVM, which I think is pretty tough to do correctly. After every instruction, the state would have to be regenerated, and the new state's merkle root fed into the next instruction, which requires 1 state root update per instruction, and that is fucking expensive.

>> No.24054765

>>24052007
Yes. Eth 2.0 will have the same relationships with other erc20s. Its just improving Eth not deleting it.

>> No.24054855

>>24054721
even proving a single hash is already expensive in zk (implying secure hashes, not meme ECC hashes), and proving a complete state update will be even more expensive, and you'd have to do it for every computation step you're performing. This is a very memory-expensive task if parallelised, and thus not a good fit for GPUs and FPGAs.

>> No.24054889

>>24054721
>After every instruction, the state would have to be regenerated, and the new state's merkle root would have to be inputted into the new instruction, which requires 1 state root update per instruction, and that is fucking expensive
Why do you think the entire ethereum has to be regenerated per opcode? All you need is to first post all accessed state on-chain (in the fully stateless mode), then make only that state into a merkle/whatever tree, and then update only that tree per opcode, not the entire global state.
The intermediate tree can very well use a "meme ecc hash" as you described them.
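A sketch of the scheme just described: build a Merkle tree over only the state slots a transaction touched, then let each per-opcode write rehash just the path to the root. This assumes a power-of-two number of slots for brevity and uses SHA-256 rather than a snark-friendly hash:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_levels(leaves):
    """Full Merkle tree (all levels) over just the accessed state slots.
    Assumes len(leaves) is a power of two."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def update_leaf(levels, index, new_leaf):
    """Per-opcode state write: rehash only the root path, log2(n) hashes."""
    levels[0][index] = h(new_leaf)
    for d in range(1, len(levels)):
        index //= 2
        left, right = levels[d - 1][2 * index], levels[d - 1][2 * index + 1]
        levels[d][index] = h(left + right)
    return levels[-1][0]

accessed = [b"slot0=1", b"slot1=2", b"slot2=3", b"slot3=4"]   # posted on-chain
levels = build_levels(accessed)
new_root = update_leaf(levels, 1, b"slot1=99")

# The incremental update matches a full rebuild of the modified tree.
assert new_root == build_levels([b"slot0=1", b"slot1=99", b"slot2=3", b"slot3=4"])[-1][0]
```

The point of the post is the cost model: an opcode that writes one slot touches log2(n) hashes of a small local tree, not the global state trie.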

>> No.24054924

>>24054889
>the entire ethereum
state

Also, it doesn't necessarily have to be evm, it just has to be some turing complete vm

>> No.24054958

>>24054889
It doesn't have to be the global state, but still, updating any merkle tree once per op code is fucking expensive if this operation has to also include a ZK proof.

>> No.24055016

>>24054958
no it's not. tornado.cash already does it in a browser with a "meme hash".

>> No.24055183

>>24055016
>tornado.cash
>proof time: 10s
Also, that's just for a single transaction without any computations attached to it. Even with your meme hash it needs 10 fucking seconds to prove it.

>> No.24055232

>>24054958
https://tornado.cash/
(not a zkrollup, but it has to verify a merkle tree witness, which is equivalent in computing load, as adding a new element has the same complexity)
then there are two zkrollups already on mainnet:
https://loopring.io - dex
https://zksync.io - transfers (smart contracts on testnet)
so your claims that it's not practical are empirically false
it's just a matter of a few more algorithmic and implementation advances for a fully turing-complete zkvm.

On a related note, smart contracts can't work without zero knowledge. That's the only way enterprises can use them. They're not going to put their critical data in public.
>>24055183
>proof time: 10s
>in a single threaded js on cpu as opposed to multithreaded cpu with assembly code, gpus or even fpgas
we are talking orders of magnitude differences in performance here

>> No.24055331

>>24055232
ok, but even if you're 100k times faster, you'll have 0.1ms for every (simple) op code executed by the EVM. If you have 1000 op codes, which can easily happen, you'll spend 0.1s just proving one transaction, which would limit your node to ~10 tps. Also note that you can't really parallelise this process, since you don't know in advance which data a transaction will access.
>On a related note, smart contracts can't work without zero knowledge. That's the only way enterprise can use it. They're not going to put their critical data in public.
>What is homomorphic encryption
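The 0.1ms → 10 tps arithmetic above, spelled out (the 100k speedup and the 1000-opcode count are the thread's hypotheticals, not measured figures):

```python
js_proof_time = 10.0        # seconds for one tornado.cash-style proof in browser JS
speedup = 100_000           # hypothetical asm/GPU/FPGA gain claimed above
per_opcode = js_proof_time / speedup    # 0.1 ms to prove one (simple) opcode
opcodes_per_tx = 1000                   # easily reached by a real contract call
tx_proof_time = per_opcode * opcodes_per_tx   # 0.1 s per transaction
tps = 1 / tx_proof_time                       # ~10 transactions per second
```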

>> No.24055380
File: 12 KB, 853x124, file.png [View same] [iqdb] [saucenao] [google]
24055380

>>24051686
thanks anon, just sold 11

>> No.24055528

>>24055331
at the lowest level zk-snarks rely on polynomial multiplication which rely on FFT, and FFT is n lg n with lg n being the parallel complexity given n processing units.
>Also note that you can't really parallelise this process since you don't know in advance which data a transaction will access.
this is wrong, you can execute all transactions first without any zkproofs to get all intermediate data.
Which means sequential performance is irrelevant: zk-snark proofs are massively parallelizable. That's why each step, cpu single thread -> multi -> gpu -> fpga -> asic, results in enormous performance gains.
>What is homomorphic encryption
really? It's orders of magnitude heavier. Like you need several minutes on a gpu just to sort 16 numbers.
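The n lg n claim, as code: a textbook radix-2 Cooley-Tukey sketch. The two recursive calls share no data, which is the parallelism the post is pointing at.

```python
import cmath

def fft(a: list[complex]) -> list[complex]:
    # Cooley-Tukey radix-2 FFT: T(n) = 2*T(n/2) + O(n), i.e. O(n log n) work.
    # The two recursive calls are independent, so with enough workers the
    # critical path is only the O(log n) recursion depth.
    n = len(a)
    if n == 1:
        return a
    even = fft(a[0::2])   # these two calls can run in parallel
    odd = fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out
```

For example, `fft([1, 1, 1, 1])` concentrates everything in the DC bin, giving `[4, 0, 0, 0]` up to float error.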

>> No.24055633

>>24055380
I don't know what happens to the price in the short term, but the news in the article is actually good news, the author is just an idiot

>> No.24055665

>>24052007
top kek anon, you're unironically gmi because of the sheer retard you are

>> No.24055740
File: 9 KB, 286x278, file.png [View same] [iqdb] [saucenao] [google]
24055740

will i get filled today guys?

>> No.24055781

>>24055528
Alright, I get that you can parallelise zk proofs, but that doesn't increase throughput anyway: computing 2 transactions at once leaves each with only half the computational resources, so each takes twice as long. And doing it the way you suggested would involve lots of state copies, which would kill your cache.

>> No.24055837

>gobble gobble sharding gobble blerg gurgle zksnarks scoff blerg gobble scaling glerp

Literally a thread full of people arguing over the particulars of a Rube Goldberg machine that is useful for absolutely fucking nothing apart from hosting yet more shitcoins

STAY POOR BAGHOLDERS

>> No.24055878

damn what are all those poor african kids gonna do now. I'll have to buy extra ethereum this month so their bags aren't worthless

>> No.24055937

>>24055837
based

>> No.24055963

>>24055781
so as I wrote
>it's just a matter of a little more algorithmic advances + implementation advances for fully turing complete zkvm.
right now what's available are fully circuit per execution (one proof per contract), which are in practice going to be enough for almost everything. It just means that you can't have unbounded loops.
In addition to previous links see https://zokrates.github.io/ and https://github.com/iden3/circom

>> No.24055995

>>24055963
>fully circuit
full circuits

>> No.24056050

>>24055781
anyway there are also starks, whose details I admit I don't know well, but from what I read they are way faster at proof generation, at the cost of proofs being logarithmic in the size of the circuit rather than constant.
https://medium.com/starkware/hello-cairo-3cb43b13b209
in theory you could do a zk-snark that verifies a logarithmic stark proof, I suppose.

>> No.24056060

Incoming exodus. Can't wait.

>> No.24056083

>>24056050
another advantage of starks is that they're quantum secure as they rely on hashes.

>> No.24056147
File: 60 KB, 775x890, 1579222646270.png [View same] [iqdb] [saucenao] [google]
24056147

>>24055781
ohh I just remembered one post
https://ethresear.ch/t/introducing-distaff-a-stark-based-vm-written-in-rust/7318
as you can see zk-vm is very close to being practical, whether on starks or zk-snarks.

>> No.24056283

how do i profit off not understanding 99% of whats discussed here

>> No.24056310

>>24056283
short eth

>> No.24056397

>>24056310
but ive been accumulating my 32 stack for a fucking year

>> No.24056413

>>24056397
put eth in margin wallet, and then short eth

>> No.24056461

>>24056283
Buy ada while it's still $0.10

>> No.24056483

>>24056283
buy the next eth killer that fails like the 200 before it, obviously

>> No.24057129

>>24056147
>proof size
anyway, I'm out, I have to do some work

>> No.24057200

>>24057129
yes that's the logarithmic factor, but that means batches are practical. As you can see with each doubling of size proof size increased only by 30%.
Phase1 is going to be 1MB/s

>> No.24057282

>>24057200
>As you can see with each doubling of size proof size increased only by 30%.
uhh that's incorrect, I phrased that wrong
what I meant is that each doubling increases the proof by roughly a constant amount, so a 2^32 operation proof is going to be only ~32× larger than a one-operation proof.
Which makes it practical for batches
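In numbers (toy model with a made-up chunk count, just to show the log scaling being described, not actual Distaff figures):

```python
import math

def proof_chunks(n_ops: int) -> int:
    # One fixed-size chunk per doubling of the traced computation: O(log n).
    return 1 + int(math.log2(n_ops))

# A 2^32-operation proof carries ~33 chunks vs 1 for a single operation:
# roughly 32x larger, nowhere near 2^32x — which is what makes batches practical.
ratio = proof_chunks(2**32) / proof_chunks(1)
```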