Stochastic Computing


First set of examples here:

Examples of group madness in technology

Many of these were taken from the comments on the last one; thanks bros. Again, one of the worst arguments I hear is that “thing X is inevitable because the smart people are doing it.” There are tons of examples of smart people doing and working on stupid things because everyone else is doing it. Everyone conveniently forgets the stupid things the “smart people” did in the past, blinded by modern marketing techniques trumpeting The Latest Thing. It’s one of my fundamental theorems that “smart people” are even more susceptible to crazes and tulip bulb nonsense than primitive people, mostly because of how they become “smart.” Current year “smart people” achieve their initial successes by book learning. This is fine and dandy as long as someone who is actually smart selected good and true books to read. The problem is, current year “smart people” take marketing baloney as valid inputs. Worse, they also take “smart people” social cues as important inputs: they fear standing out from the herd, even when it is obvious the herd is insane. It’s how stupid political ideas spread as well.

That’s how we have very smart people working on very obvious nonsense like battery-powered airplanes. Just to remind everyone: an electric powertrain is heavy once you strap on the batteries, which store something like 50x less energy per kilogram than guzzoline. To say nothing of the fact that batteries take a lot longer to charge than filling up gas tanks. If you want to make air travel greener, make the FAA testing requirements for certifying small engines for flightworthiness cheaper. We still use designs requiring leaded gasoline from the 1950s, back when certs were cheaper because of the lack of this bureaucratic overhead. You can get 2-4x more fuel efficiency this way; better than idiocy like hoping a battery-operated airplane will work. You can also just make air travel 10x more expensive so there are fewer scumbags from America (and everywhere else: I don’t like you either unless you’re Japanese) filling up my favorite places.
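For the skeptical, a back-of-envelope sketch in C. The figures are nominal assumptions for illustration (roughly 12.9 kWh/kg of chemical energy for gasoline, roughly 0.25 kWh/kg at the pack level for lithium-ion, and rough guesses at engine and drivetrain efficiency), not numbers from any particular aircraft:

```c
#include <stdio.h>

/* Back-of-envelope energy density comparison. All figures are nominal
 * assumptions for illustration: gasoline ~12.9 kWh/kg of chemical energy,
 * a piston engine turning ~30% of that into shaft power; lithium-ion
 * ~0.25 kWh/kg at the pack level, an electric drivetrain at ~90%. */
int main(void) {
    const double gas_kwh_per_kg      = 12.9;  /* chemical energy */
    const double gas_efficiency      = 0.30;  /* piston engine, rough */
    const double liion_kwh_per_kg    = 0.25;  /* pack level, rough */
    const double electric_efficiency = 0.90;  /* motor + inverter, rough */

    double gas_useful   = gas_kwh_per_kg   * gas_efficiency;
    double liion_useful = liion_kwh_per_kg * electric_efficiency;

    printf("raw energy per kg, gasoline vs li-ion: %.0fx\n",
           gas_kwh_per_kg / liion_kwh_per_kg);
    printf("useful (shaft) energy per kg ratio:    %.0fx\n",
           gas_useful / liion_useful);
    return 0;
}
```

Even after being generous to the electric drivetrain, the battery plane is hauling well over an order of magnitude more mass per unit of useful energy, before you count charging time.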

Distributed manufacturing. This is a recent one: solid printers are indeed useful and have become important tools in various applications, but people attributed magical properties to additive manufacturing. Lots of hobbyists now use plastic solid printers to make plastic telescope rings or Warhammer 40k figurines. They get used in big boy machine shops to make enclosures for prototypes, molds, and, on occasion, metal-sintered objects which would be extremely difficult to cast. Believe it or not, making enclosures was a very time-consuming part of prototyping in my lifetime; I remember the first solid-printed enclosure I got, and I thought it was wonderful. Molds: you can now email the pattern to people rather than mailing hand-carved styrofoam molds. Metal-sintering solid printing is extremely time- and energy-intensive (and by nature probably always will be), but it’s totally worth it for something thin with lots of internal structure, like a rocket nozzle. All this is great and welcome progress, but it’s not how it was sold to people maybe 15-20 years ago. Back then, people were overtly saying it would be the end of centralized manufacturing. Every neighborhood would have a star trek replicator which would make them stuff from plans emailed over the internet. This was very obviously the sheerest nonsense to anyone familiar with objects made out of matter and how they are made, yet it was uncritically repeated and amplified by millions of people who should have known better. In fact it’s still uncritically repeated, though I guess it is less of a craze than it was 20 years ago as more people have experience with these things. It’s tough to get excited about the “materials savings” of solid printing things when you’re paying 50 bucks for the feedstock needed to make a 2-inch-tall Yoda figurine. Back in 2012, there was a mass hysteria about solid printers making guns. As I pointed out, you could make guns of similar quality by whittling pieces of wood, or using pipes you get from the hardware store. The only reason this is viable at all is that the legal “gun” part in the US is the lower receiver, which is not a part that needs to be made well for an AR-15 to function. Magic star trek replicators for AR-15s, alas, are not going to be a thing in our lifetimes. You can make them on machine tools, though, and machine tools don’t cost much. Nobody would have worried about solid-printed “guns” if they didn’t think solid printers were magic star trek replicators, which, back in 2012, they did. FWIW the lizard men at the WEF are still trying to sell this, at least when combined with IoT, AI, 5G and … gene editing. Absolute proof nobody involved with the WEF has ever manufactured any object of worth to humanity.

The Paperless Office was a past craze. Lots of companies were based around this idea. I never bought into it; I had access to the best screens available at the time, and still sent all the physics papers to the LaserJet printer, or walked to the library and photocopied journal articles. Even if you make giant thin PDF readers for portability, it’s a lot easier to scribble notes on a piece of stapled paper, which is also easier to read, lighter, lasts longer, and doesn’t need its batteries charged. Some of the ideas in the Byte articles above ended up being used in search engines. Stuff like collaborative documents was a useful idea. The OCR approaches in use at the time made scanned documents pretty worthless; not sure that’s improved any. There is still lots of paper around every office, and barring some giant breakthrough in e-ink, there always will be. The vision of paperless sounded real good: you could search a bunch of papers in your pre-cloud document fog. The reality was you’d do a shitload of work and spend a shitload of money, and a filing cabinet with paper organized by subject tabs still worked a lot better and was a zillion times cheaper. BTW I still maintain that having a couple of nice nerdy ladies with filing cabinets, telephones and fax machines is more economically efficient than running a database system for your business in like 98% of situations. That’s why the Solow “paradox” is still in play. Pretty sure that’s why places like Japan and Germany still use such systems instead of generating GDP by firing the nice nerdy ladies, hiring a bunch of H1b computard programmers, and buying a lot of expensive IT hardware.

Lisp unt Prolog: yes, my favorite programming language is a lisp. The Prolog nonsense was the same era; both were intertwined with the 5th generation computing project, but as Lisp is still a cult language, and insane people still use Prolog, it’s worth a few words. Prolog is easily disposed of: it is trivial to code up constraints which have NP-hard solutions in Prolog. That’s kind of inherent to how constraint programming works. This probably looked absolutely amazing on 1988-tier technology. 1989-tier technology was 32-bit, hard-limited to 2GB of address space; in real life limited to 64MB, because that’s how many memory chips you could stuff into a SPARCstation 1. That was a giant, super-advanced machine for its day; running a Prolog compiled by Lisp (which took up a good chunk of this memory), you actually could solve NP-hard problems, because there isn’t much to them when you’re constrained to such small problem sizes. This was even more true on the 24-bit, 1MB Lisp machines. The idea that adding a couple of bits to the result you were interested in would explode the computation didn’t occur to people back then. We should all know this by now and avoid doing any Prolog without thinking about how the compiler works (in which case, why not use something else where it is more obvious), or wait for super-Turing architectures which think NP=P (or use Prolog as a front end for some kind of solver).
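To make the “adding a couple of bits explodes the computation” point concrete, here is a minimal sketch in C of the kind of brute-force search a naive generate-and-test constraint program amounts to: subset-sum, where every extra item doubles the work. The example instance and numbers are illustrative assumptions, not taken from any actual Prolog system:

```c
#include <stdio.h>

/* Brute-force subset-sum: does any subset of set[] add up to target?
 * This is roughly what naive generate-and-test constraint solving does
 * under the hood: enumerate all 2^n subsets. Each extra item doubles
 * the search space, which is cute at n=20 and hopeless at n=60. */
static int subset_sum(const int *set, int n, int target) {
    for (unsigned long mask = 0; mask < (1UL << n); mask++) {
        int sum = 0;
        for (int i = 0; i < n; i++)
            if (mask & (1UL << i))
                sum += set[i];
        if (sum == target)
            return 1;
    }
    return 0;
}

int main(void) {
    int set[] = {3, 34, 4, 12, 5, 2};   /* toy instance, n = 6 */
    int n = sizeof(set) / sizeof(set[0]);
    printf("subset summing to 9 exists: %s\n",
           subset_sum(set, n, 9) ? "yes" : "no");
    printf("subsets examined (worst case): %lu = 2^%d\n", 1UL << n, n);
    return 0;
}
```

On a 1980s workstation a toy instance like this finishes before you lift your finger off the return key, which is exactly why the blow-up was so easy to ignore.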
Lisp has a different, more fundamental basket of problems. You can easily write a Prolog in Lisp, which one might notice is a serious problem considering the above. This is given as a student exercise by Lisp’s strongest soldier, who, despite being a rich VC, doesn’t seem to have gotten rich investing in any successful Lisp companies. He claims he got his first pile of loot writing a bunch of web shop nonsense in Lisp (which later got translated into shitty C++). I dunno, I wasn’t there back in ’95; we used Fortran in those days. Even by his own admission, it was a creation of N=2 programmers, very possibly mostly N=1, and it was not modifiable or fixable by anybody else. I think that’s both what is cool and what is wrong with Lisp. You can write macros in it, and modify the language to potentially be quite productive in a small project: but how are others supposed to deal with this? Did you write extensive documentation of your language innovations and make everyone on the team read it? What happens when N>3 people do this? Is it an N^N problem of communication when N people write macros? In R (an infix version of Scheme), people deal with this by ignoring the packages which use certain kinds of alterations of the language (aka the Hadleyverse, which I personally ignore religiously), or by embracing that one kind of macro and only doing that thing. Maybe it’s better to keep your language super bureaucratic and spend 400 lines of code every time you send some data to a function to make sure the function knows what’s up. That’s how almost everything that has successfully made money has done it. They all use retard languages that are at least OK at solving problems, not mentat languages that self-modify as part of the language specification. Maybe Paul Graham got lucky back in 1995 because generating valid HTML which holds state was something one or two dudes could do in Lisp. It wasn’t like they had very many choices; most languages sucked at that sort of thing. In fact, in the year of our Lord 1995 a lot of people developed programming languages designed to emit stuff like a valid HTML webstore: JavaScript, Java, Ruby and PHP are examples we all remember, and they went on to create trillions in value. That is greater value than anything Lisp has ever done, basically by being kind of limited, “squirting stateful HTML over wires” domain-specific retarded, and not giving users superpowers to easily modify the language. One of the fun things about Paul Graham’s big claims about Lisp is we know for a fact it all could just as easily have been done in Perl: because, actually, it was done in Perl, multiple times. Perl was not only more productive in terms of value created, it was more legible too, and amenable to collaboration. Lisp of course had the ability to mutate HTML with state: it was a sort of specification language for other languages. That’s what first-gen AI inherently was: custom interpreters. Maybe if they had just solidified the macros and made everyone use them, or, like, written a library, it would still be used somehow. Anyway, fuck Lisp, even if I am overly fond of one of its dialects.

CORBA was a minor craze in the mid-90s. I remember building some data acquisition/control gizmo using RPCGEN; it took like a day of reading the manual despite my never having done anything like that before. As far as I know, my smooth-brain thing still functions to this day. An architecture astronaut two beamlines over wondered why I didn’t use CORBA. As I recall, his thing didn’t quite work right, and never actually did, but as he was senior to me I just told him I didn’t know C++ very well (plus it didn’t work on VxWorks and lacked a LabVIEW hook). I never learned much about this thing, but I think its selling point was its “advanced suite of features” and its object orientation. It was a bit of a craze; if you go look at old Byte magazines you’ll find software vendors bragging about using it in the mid-90s. Java, Domino, Lotus Notes; are you not impressed? Did these CORBA things not set the world on fire? If you look at what it actually was, it looks like a student project to make teacher happy with fashionable object orientation rather than something used to solve real problems.

Come to think of it, whatever happened to Object Oriented Everything? I remember in the early 1990s, when I was still using Fortran, people were always gabbling on about this stuff. People selling the idea would have these weird diagrams of boxes with wires connecting to other boxes; you could tell it was designed to appeal to pointy-headed managers. I couldn’t make much of it, thinking perhaps it might make sense for something which has physical box-like things, such as a GUI. Later on I realized what people were really looking for was namespaces: something you could get a ghetto version of using naming conventions, or stuff like structs with function pointers in C if you wanted to get fancy. The other things (polymorphism, operator overloading, inheritance) were usually not so helpful or useful for anything. People came up with elaborate mysticisms and patterns about objects: remember things like “factory objects” and “decorator patterns” and “iterator patterns?” You could make these nice block diagrams in UML so retards could understand it! All this gorp was supposed to help with “code reuse,” but it absolutely didn’t: mostly it just added a layer of bureaucratic complexity to whatever you were doing, making it the opposite of code reuse; you had to write MOAR CODE to get it to do anything. You could probably trace a history of objecty dead ends by looking at C++ “innovations” over the years: objects, generics/templates, eventually some functional features allowing one to do programming that looks a lot like what we were doing with C macros, all while maintaining backward compatibility with the 50 previous generations of C++ paradigms.
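For the curious, here is roughly what I mean by a ghetto namespace built from a struct full of function pointers in plain C. The vec2 example is hypothetical, just to show the shape of the thing (compile with -lm for sqrt):

```c
#include <stdio.h>
#include <math.h>

/* A ghetto "namespace" in plain C: a struct of function pointers.
 * You get the grouping people actually wanted out of OO, with none
 * of the factory/decorator/iterator mysticism. Hypothetical example. */
struct vec2_ops {
    double (*norm)(double x, double y);
    double (*dot)(double x1, double y1, double x2, double y2);
};

static double vec2_norm(double x, double y) { return sqrt(x * x + y * y); }
static double vec2_dot(double x1, double y1, double x2, double y2) {
    return x1 * x2 + y1 * y2;
}

/* All the vector routines live behind one name, namespace-style. */
static const struct vec2_ops Vec2 = { vec2_norm, vec2_dot };

int main(void) {
    printf("norm = %g\n", Vec2.norm(3.0, 4.0));          /* 5 */
    printf("dot  = %g\n", Vec2.dot(1.0, 2.0, 3.0, 4.0)); /* 11 */
    return 0;
}
```

The even cheaper version is just a naming convention: call everything vec2_something and you get most of the benefit without the struct.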

Related: this 5-minute video is worth the time if you’re still an LLM vibe coding respecter. The man has a simple argument: if LLMs are so great at writing code that they’re replacing people googling Stack Overflow, where’s all the code? He brings receipts from a few of the efforts to measure programmer productivity with LLM code assistants. The comments are funny too!

You can also transport yourself to 1991 and read a post-mortem of the first AI winter here.
