My first Linux distro was Ubuntu in 2006, with a graphical installer from the boot CD. It was revolutionary in my eyes, because WinXP was still installed using a curses-like text interface at the time. As I remember, installing Ubuntu was significantly easier than installing WinXP (and wireless Internet support was basically shit in either OS at the time).
Cutting edge chip making is several different processes all stacked together. The nations that are roughly aligned with the western capitalist order have split up responsibilities across many, many different parts of this, among many different companies with global presence.
The fabrication itself needs to tie together several different processes controlled by different companies. TSMC in Taiwan is the current dominant fab company, but there's a wave of companies close behind them (Intel in the US, Samsung in South Korea).
There's the chip design itself. Nvidia, Intel, AMD, Apple, Qualcomm, Samsung, and a bunch of other ARM licensees are designing chips, sometimes with the help of ARM itself. Many of these leaders are still American companies developing the design in American offices. ARM is British. Samsung is South Korean.
Then there's the actual equipment used in the fabs. The Dutch company ASML is the most famous, as they have a huge lead on the competition in manufacturing photolithography machines (although old Japanese competitors like Nikon and Canon want to get back in the game). But there are a lot of other companies specializing in specific equipment found in those fabs. The Japanese company Tokyo Electron and the American companies Applied Materials and Lam Research are in almost every fab in the West.
Once the silicon is fabricated, the actual packaging of that silicon into the little black packages to be soldered onto boards involves a bunch of other steps, with different companies specializing in each of the processes relevant to that.
Plus, advanced logic chips aren't the only type of chip out there. There are analog, signal processing, power, and sensor chips for embedded applications, where companies like Texas Instruments dominate on less cutting-edge nodes, and memory/storage chips, where the market is dominated by three companies: South Korea's Samsung and SK Hynix, and America's Micron.
TSMC is only one link in this chain, standing on a tightly integrated ecosystem that it depends on. It also isn't limited to Taiwan: it owns fabs that are starting production in the US, Japan, and Germany.
China is trying to replace literally every part of the chain with domestic manufacturing. Some parts are easier to replace than others, but trying to insource the whole thing is going to be expensive, inefficient, and risky. Time will tell whether those costs and risks are worth it, but there's by no means a guarantee that they can succeed.
No, X-rays are too energetic.
Photolithography is basically shining some kind of electromagnetic radiation through a stencil so that specific lines are etched into the top "photoresist" layer of a silicon wafer. The radiation causes a chemical change wherever a photon hits, so the stencil blocks the photons in a particular pattern.
Photons are subject to interference from other photons (and even themselves) based on wavelength, so smaller wavelengths (which are higher energy) can resolve smaller and finer features, which ultimately means smaller transistors, so more can fit in any given area of silicon.
But once the energy gets too high, as with X-ray photons, a secondary effect ruins things. The photons have too much leftover energy even after exposing the photoresist, and that excess knocks loose secondary electrons that scatter and radiate underneath the surface. The resulting boundaries between the photoresist that has been exposed to radiation and the stuff that hasn't become blurry and fuzzy, which wrecks the fine detail.
So much of the 20 years leading up to commercialized EUV machines was about finding the sweet spot for feature size: wavelengths short enough to print really fine details, at energy levels low enough not to trigger those secondary effects.
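For a rough sense of the tradeoff, the Rayleigh criterion ties the smallest printable feature to the wavelength and the numerical aperture (NA) of the projection optics. Here's a minimal sketch in Python; the k1 and NA values are illustrative ballpark figures, not any specific machine's specs:

    # Rayleigh criterion: critical dimension CD = k1 * wavelength / NA.
    # k1 is a process-dependent factor (~0.25-0.4); NA is the numerical
    # aperture of the projection optics. Values below are illustrative.

    def critical_dimension_nm(wavelength_nm, na, k1):
        """Smallest feature size a lithography system can resolve."""
        return k1 * wavelength_nm / na

    # Deep-UV immersion (ArF laser, 193 nm, NA ~1.35): ~43 nm features
    print(critical_dimension_nm(193.0, na=1.35, k1=0.3))
    # EUV (13.5 nm, NA ~0.33): ~12 nm features
    print(critical_dimension_nm(13.5, na=0.33, k1=0.3))

Shrinking the wavelength from 193 nm to 13.5 nm is what buys the finer features, and X-rays would shrink it further on paper, if not for the secondary-electron blur described above.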
2 lanes in each direction with a middle lane? That's a big chunk of Texas, especially when weighted for population.
How will it reduce demand for parking? Do you envision the car will drop someone off and then drive away until it finds a parking spot that's farther than the person would want to walk?
Plenty of high-demand areas use human valets for exactly this issue. The driver drops off their car at the curbside destination, and then valets take the vehicle and park it in a designated area that saves the driver some walking.
The valet parking area in dense places can then use tighter parking, where cars are allowed to block others in. As a result, the same amount of paved area can accommodate more cars. That's why, in a lot of dense cities, garages with attendants you leave keys with are cheaper than self-park garages.
Automated parking can therefore achieve higher utilization of the actual paved parking areas, a little bit away from the actual high pedestrian areas, in the same way that human valet parking already does today in dense walkable neighborhoods.
and people wouldn't be happy waiting 5-10 minutes for their car to navigate back to them.
As with the valet comparison, it's basically a solved problem: people already put up with this by calling ahead and making sure the car is ready for them at the time they anticipate needing it.
Once again reinventing buses and trains
Yes! And trains are very efficient. Even when cargo is containerized, where a particular shipping container may go from truck to train to ship, each individual containerized unit will want to take advantage of the scale between major hubs while still having the flexibility to travel between a specific origin and destination along the spokes. The container essentially hitches a ride with a larger, more efficient, high-volume transport for part of its journey, and breaks off from the pack for the portions where shared routing no longer makes sense.
The default in most other states is that opposite-direction traffic on a divided highway doesn't have to stop. The states differ in what constitutes a divided highway, but generally at least 5 feet of space or a physical barrier between the lanes would qualify. In Texas, however, there is no exception for divided highways; the exception applies only to a "controlled-access highway," which requires on/off ramps and physical barriers between traffic directions, or "different roadways."
So for a 5-lane road where there are 2 lanes going in each direction with a center lane for left turns, Texas requires opposite direction traffic to stop, while most other states do not.
Waymos were violating a Texas state law that requires cars to stop when a school bus stops, even on 2+ lane roads separated by a paved median, even for traffic going in the opposite direction:
https://liggettlawgroup.com/wp-content/uploads/2019/09/School-bus-laws-img-1024x657.png
The requirement for opposite-side traffic to stop on multi-lane roads is pretty rare and might be unique to Texas. And yes, human drivers fuck this up all the time, too, leading to a lot of PSAs in Texas, especially for new residents.
Paradoxically, the large scale deployment of self driving cars will improve the walkability of neighborhoods by reducing the demand for parking.
One can also envision building on self-driving tech to electronically couple closely spaced cars so that more passengers can fit in a given area, such that throughput of passenger-miles per hour can increase several times over. Cars could tailgate like virtual train cars, following each other at highway speeds with very little separation; lanes could be narrowed to fit more cars side by side in traffic; etc.
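To put rough numbers on that throughput claim, here's a back-of-envelope sketch; the 2-second human following gap and the 0.3-second coupled gap are illustrative assumptions, not measured values:

    # Lane throughput in vehicles/hour = speed / center-to-center spacing,
    # where spacing = car length + speed * following gap.

    def vehicles_per_hour(speed_mps, car_length_m, gap_s):
        spacing_m = car_length_m + speed_mps * gap_s
        return 3600 * speed_mps / spacing_m

    # Human drivers at ~30 m/s (highway speed) with a ~2 s gap: ~1,670/hour
    print(vehicles_per_hour(30.0, car_length_m=4.5, gap_s=2.0))
    # Electronically coupled cars with a ~0.3 s gap: ~8,000/hour
    print(vehicles_per_hour(30.0, car_length_m=4.5, gap_s=0.3))

That's roughly a 4-5x gain per lane under those assumptions, before even narrowing the lanes.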
Most importantly, the projections of fusion being 30 years away depended on assumptions about funding levels, and political considerations meant we basically never came anywhere close to those assumptions:
https://commons.wikimedia.org/wiki/File:U.S._historical_fusion_budget_vs._1976_ERDA_plan.png
Fusion was never vaporware. We had developed working weapons relying on nuclear fusion in the 1950s. Obviously using a full-blown fission reaction to "ignite" the fusion reaction was never going to be practical, but the core physical principles were always known; the engineering and materials science just needed to catch up with alternative methods of igniting those fusion reactions and harvesting their energy.
But we never really devoted the resources to figuring it out. Only more recently has there been significant renewed interest in funding the research to make it possible, and as you note, many different projects are hitting different milestones on the frontier of that research.
Writing 360 TB at 4 MB/s will take over 1000 days, almost 3 years. Retrieving 360 TB at a rate of 30 MB/s is about 138 days. That capacity-to-bitrate ratio is going to be really hard to use in a practical way, and it'll be critical to get that speed up. Even their target of 500 MB/s is still more than 8 days to read or write the data from one storage platter.
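The arithmetic, for anyone who wants to check (assuming decimal units, 1 TB = 10^12 bytes):

    # Days to transfer a 360 TB platter at a given sustained rate.

    def transfer_days(capacity_tb, rate_mb_per_s):
        seconds = capacity_tb * 1e12 / (rate_mb_per_s * 1e6)
        return seconds / 86400

    print(transfer_days(360, 4))    # ~1042 days to write at 4 MB/s
    print(transfer_days(360, 30))   # ~138.9 days to read at 30 MB/s
    print(transfer_days(360, 500))  # ~8.3 days even at the 500 MB/s target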
I think cheaper consumer desktops using IDE hard drives worked out of the box, but some more exotic storage configurations (SCSI, anything to do with RAID) were a little bit harder to get going.