There's a principle in distributed systems that you can't really count on clocks to be synchronized in a very large system. The thing about Parallel Sysplex, though, is that it is not particularly scalable: it maxes out at 32 nodes. Those nodes are pretty big, so the system overall is big enough for most of what the Fortune 500 does, but tiny compared to Google, Facebook, or a handful of really big systems. Sysplex revolves around distributed data structures similar to what Hazelcast provided in the beginning.
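For anyone who hasn't used it, here's a minimal sketch of the kind of cluster-wide shared data structure Hazelcast offers (package names as in recent Hazelcast releases; the map name and values are made up):

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;

    public class SharedMapDemo {
        public static void main(String[] args) {
            // Every JVM running this joins the same cluster and sees the
            // same map -- loosely analogous to sysplex members sharing
            // structures through the coupling facility.
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            IMap<String, Long> balances = hz.getMap("balances");
            balances.put("acct-1", 100L);
            System.out.println(balances.get("acct-1")); // same answer on any member
        }
    }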
True, but you could make a warehouse of sysplexes work together using the same mechanisms we use for warehouses of generic servers. And if each machine takes four racks and one sysplex takes 128 racks, you end up with thousands of times fewer systems to coordinate.
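Back-of-the-envelope, with an assumed density of about forty 1U servers per rack: the same 128 racks would hold 128 × 40 ≈ 5,000 commodity boxes, each a separately coordinated system, versus a single sysplex, so "thousands of times fewer" is about right.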
All that would remain is an eye-watering hardware and licensing bill.
The HPC folks around me broke hard for "performance/price is the main thing" circa 2000 or so, once scalable systems became feasible. The "counter," if it is one, is that you might want a high-end (InfiniBand) or specialized (all the stuff in BlueGene) communications framework.
Given that, having to manage two layers of parallelism to get the most out of some super-expensive hardware seems like a non-starter. I think the appeal of z/Architecture is that you can use a set of well-developed tools and frameworks like DB2 and CICS to build a certain sort of application. The early motivation for Sysplex was that IBM had to make the transition from bipolar to CMOS transistors, and the first CMOS mainframes could not equal the performance of the biggest bipolar mainframes, so they needed N CMOS mainframes to do the job of one bipolar machine, where N is a small number.
The vision I do get out of this idea is some kind of system with a very smart compiler that looks at things in a fractal manner: it knows it can apply SIMD to a calculation, then apply SMP to it, then apply clustering techniques, and who knows, a "cluster of clusters" might make sense for geographically distributed situations. I think of
https://en.wikipedia.org/wiki/Cache-oblivious_algorithm
but not so much with the algorithm being oblivious; rather, the compiler very much models the opportunities for parallelism and the costs of moving data around, so the application developer can be "oblivious" to it all.
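You can fake the first two layers of that fractal by hand today. A sketch in Java (all names mine, and whether the inner loop actually gets SIMD treatment is up to the JIT's auto-vectorizer):

    import java.util.stream.IntStream;

    public class FractalDot {
        // Layer 1 (SIMD): a tight scalar loop the JIT can auto-vectorize.
        static double partialDot(double[] a, double[] b, int from, int to) {
            double sum = 0;
            for (int i = from; i < to; i++) sum += a[i] * b[i];
            return sum;
        }

        // Layer 2 (SMP): split the index range across cores.
        static double dot(double[] a, double[] b, int chunks) {
            int n = a.length;
            return IntStream.range(0, chunks).parallel()
                    .mapToDouble(c -> partialDot(a, b, c * n / chunks, (c + 1) * n / chunks))
                    .sum();
        }

        // Layer 3 (clustering) would partition a and b across nodes and combine
        // per-node results the same way -- same divide-and-combine shape, bigger scale.
        public static void main(String[] args) {
            double[] a = new double[1 << 20];
            double[] b = new double[1 << 20];
            java.util.Arrays.fill(a, 1.0);
            java.util.Arrays.fill(b, 2.0);
            System.out.println(dot(a, b, Runtime.getRuntime().availableProcessors()));
        }
    }

The point of the "very smart compiler" is that nobody should have to write that chunking by hand: it would pick the split points at every layer from its model of data-movement costs.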
I don't think there is a problem for which a warehouse of networked z17 sysplexes would be a cost-effective solution, but, at the very least, it'd be incredibly cool.