Putting a USB SSD on My Ubuntu Machine: A Journey Through Confusion
Recently, I decided to add a USB SSD to my Ubuntu machine. Pretty straightforward task, right? So, I set it up, partitioned it, and formatted it with an ext4 filesystem.
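The setup itself was just a few commands, roughly like the following (/dev/sdb is a hypothetical device name, not my actual one; check lsblk first, since yours will differ):

```
# Identify the USB SSD first; never guess the device name.
lsblk

# Create a GPT label and a single ext4 partition, then the filesystem.
sudo parted --script /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/sdb1
```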
Then came the question: What happens if the drive isn’t connected when I boot my system? Since it’s an external drive and not critical to my system’s operation, I didn’t want my machine to throw a fit if the SSD wasn’t present at boot time. So naturally, I turned to GPT-4 for advice.
GPT-4’s Advice: The “nofail” Option
GPT-4 responded quickly and gave me a clear suggestion: use the nofail option in the /etc/fstab file. This would ensure that the system attempts to mount the USB drive if present but continues to boot even if the drive is not connected.
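In practice that means an fstab line along these lines. A minimal sketch; the mount point and the UUID placeholder are mine, made up for illustration, not from the chat:

```
# /etc/fstab — hypothetical entry for the external SSD
# "nofail": keep booting even if the device is absent
UUID=<uuid-of-the-partition>  /mnt/usb-ssd  ext4  defaults,nofail  0  2
```

(You can get the UUID with `blkid`; using it instead of a device name like /dev/sdb1 avoids surprises when device names shuffle between boots.)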
That made sense, and GPT-4 also reassured me that using this option for non-critical filesystems is common practice. But something bugged me. “nofail” sounded counterintuitive—shouldn’t this be used for important, “must mount” filesystems? So I turned to my trusty search engine to verify what exactly “defaults,nofail” meant.
The Internet Confusion Begins
I did a quick search using Kagi (or Google—take your pick, I got the same results). On the first page, I came across this page from Rackspace’s docs: Rackspace Docs on Linux – Nobootwait and Nofail.
It flat-out stated the opposite of what GPT-4 had told me! It described nofail as an option for critical filesystems, implying that the system would wait until the drive was mounted. This seemed strange since I wanted the system to boot even when the drive wasn’t there. This increased my doubts, so I dug deeper.
Deeper Dive – The Web Only Adds to the Confusion
I kept browsing, checking multiple sources. Each page seemed to explain nofail slightly differently. Some agreed with Rackspace, others said the opposite. At this point, I was more confused than when I started. How could such a basic option be so misunderstood on the web?
The Answer Was in the Man Pages All Along
Frustrated, I decided to check the man pages—the original source of truth for Linux users. Sure enough, GPT-4 was right. The nofail option is specifically there to ensure that the boot process does not stop or fail if the specified filesystem is not present. Perfect for my use case, as I wanted the system to keep booting even if the USB SSD was missing.
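For the record, here is the relevant wording as I remember it from mount(8); check your own man pages, as the exact phrasing varies between versions:

```
nofail    Do not report errors for this device if it does not exist.
```

On a systemd machine, systemd.mount(5) adds that a nofail mount is only “wanted”, not “required”, so the boot continues even if it cannot be mounted.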
Conclusion: Crazy World We’re Living In
So here we are, in a world where search engines and online documentation can sometimes steer you in the wrong direction, while an AI (GPT-4) was spot on from the beginning. It’s crazy how something as fundamental as mounting a drive can be so muddled online.
I’ve learned two things from this: first, always double-check your sources—especially with something as complex as Linux configuration. And second, never underestimate the value of consulting the man pages. In this crazy, confused world of information overload, sometimes the simplest and most direct solution is the right one.
Oh, and yeah, nofail is exactly what you need for external drives that you don’t want to hold up your boot process. Crazy name? Maybe. But it does the job.
And, yes, this post was written by gpt4o as well, based on the cryptic text below:
Interesting.
putting USB SSD on ubuntu machine.
Asking gpt4o what to do. Works.
Worried, this being an external drive, that boot would stop when it is not there. So I ask, and it says it should be fine, but to use “nofail” to be sure.
I think: weird naming for option in open source.
Use Kagi (or google, the same result) to look for “defaults,nofail meaning”
FIRST page
says opposite of gpt4.
I look further on the net. Confusion. Finally I look at the man page,
gpt4o was right. The Internet as seen via search is plain wrong.
Hit the random Wikipedia page 3 times. Told gpt4o to make a story out of the links I got, and also a prompt that I could give to flux. Took the resulting image to runway (weakest link here, sucks, but this is not about making something good). Asked gpt4o to make a udio text for the image, which I also gave it. Extended it in udio (twice, I guess), added intro and outro. Put it together in DaVinci with obvious loops and fades. Done.
Nothing got cherry-picked here. I could basically automate this, since I made NO CHOICES during the process. This is all just how it fell out of those various machines …
For its 125th anniversary in 2011, Bosch published a book. On page 54, you can find this map:
I wondered what it would look like as an animation. In the past, this thought probably would not even have occurred to me. The brain grows with its possibilities. Today, it is simply possible to create an animation from the photo:
Admittedly, I could have managed that even before AI. It just would not have been worth the effort. Seeing the global expansion of Bosch between 1897 and 1922 as an animation is not that important to me. With AI, it is not particularly laborious. The cost-benefit ratio shifts. Not automatically. Not magically. You still have to know what you are doing, and you have to know what you want.
AI is NOT an automatic solution. It is not the case that everything today emerges magically at the push of a single button. Amusingly, that is exactly what we have been promised, strictly speaking, ever since computers have existed. And yet it has never been that way.
As with an iceberg, there is also something creepy beneath the surface of current AI expectations: it has never been easier to produce program code that appears to work but, in reality, does not. That has always been the problem with programmers who do not sufficiently master their craft. And that group of people suddenly grew a hundredfold. People who in the past failed at things like syntax, documentation, or gaps in Stack Overflow can now foist all kinds of nonsense on their customers and employers. And that is what happens. Everywhere.
CrowdStrike, never heard of them. Lucky me. Sorry for all who got impacted.
While this could very well be the first case of a broader “the AI ate my homework”, reality is never that simple, easy, or straightforward. No matter how nicely the story would work.
Wikipedia featured the fix for a while. No idea if it is legit or will burn your machine to the ground entirely. Upon further reading, it seems that the source for this “fix” is legit, but it may not be a panacea. In my very limited understanding, deleting those channel files might give the system a chance to load valid ones on the next boot.
The actual root cause of this incident will be interesting, though. Strange that it is possible to push something faulty to this many machines. One would think that preventing this would be one of the core concerns of an org like CrowdStrike.
Good luck to everybody affected. Directly or indirectly.
To me, these pages illustrate nicely the strengths and weaknesses of AI right now: The language is free from any obvious errors I would notice. The fabricated facts have some consistency to them.
And yet, it is all flat. One cliché follows the next. It is a dense condensation of all our prejudices and current assumptions. Utterly dull and uninspired. AI-generated.
This distinction between generating and creating is crucial. GenAI can generate content by synthesizing existing information and patterns. However, it often lacks the spark of true creativity. Generated things are rarely genuinely new; they are recombinations of what already exists. Creation, on the other hand, involves originality and innovation—elements that are currently more characteristic of human endeavor.
It remains an open question whether quantitative progress, which can surely be expected from AI —after all, we keep pouring yottaflops, gigadollars, and terawatts into the thing— will eventually lead to a qualitative leap. Then AI could actually be creative. We will see. If we are lucky. Again.
July 2024, and it is still surprising how erratic LLMs can be when tasked to help with very small coding jobs. I work on many projects on my Mac. For a while, I have been using my own simple directory stack implementation. Remembering paths is something the computer can do for me.
It works well and has two parts: zsh functions loaded via .zshrc, and a Python program doing the actual work (except, naturally, for the cd itself and the change of the prompt, which have to happen in the shell).
I thought it would be nice if my ‘cd’ variation could check whether a Python environment under */bin/activate exists in the directory I changed into. If so, it can source it. If there is none, it should not care, and if there are multiple, it should list them so that I can pick and choose.
Simple enough.
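To make the goal concrete, here is a minimal zsh sketch of the behavior I was after (names and structure are made up for illustration; my real setup routes through the Python program):

```zsh
# Hypothetical cd wrapper: activate a Python venv when exactly one is found.
cd() {
    builtin cd "$@" || return

    # Collect all */bin/activate scripts one level below the new directory.
    # The (N) glob qualifier makes a non-matching pattern expand to nothing.
    local -a envs
    envs=( */bin/activate(N) )

    if (( ${#envs} == 1 )); then
        source "${envs[1]}"
    elif (( ${#envs} > 1 )); then
        echo "Multiple environments found:"
        printf '  %s\n' "${envs[@]}"
    fi
    # No match: do nothing, just a normal cd.
}
```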
Parts of this would require zsh shell coding. Not something I tend to do a lot. Since Sonnet 3.5 has a message limit even in the paid version, I tend to use my paid gpt4o first.
For this simple thing, I should not have. Today gpt4o was stunningly stupid. It managed the zsh syntax well enough, but then failed completely. For a while, I was stuck in that dreadful loop where one hopes the next version will finally work. I still abort those loops of idiocy way too late.
Claude 3.5 got it right. In my frustration, I had also introduced a bug / typo on my end. Both gpt4o and Claude would have pointed it out easily if they had seen that part of the code. Claude stood out since its debug hints let me see what I had done wrong. That was beyond my current expectation horizon.
Speaking of which: I am amazed at how dumb LLMs can still be. Why gpt4o failed so badly today, I cannot say. Is it zsh that it is not familiar with? Did the system prompt assigned to me, or to my region, suddenly change? Who knows.
It must be hard to make a living based on expectations of what LLMs will deliver. They are really awesome, but can fall off a cliff at any point. Pretty much the opposite of computer work in general.
I expect people will develop all sorts of cargo cults in their work with these tools.
What does Meta do? It turns people into money. Those that are on the Internet, that is—not in a Soylent Green kind of way.
At least, that was the mantra up until 2018. Then Cambridge Analytica broke. And the Q2 2018 earnings gave a first inkling that a fixed, and rather large, share of the people entering the Internet would not, as if by magic, keep becoming Facebook users.
Later, people seemed to forget about the fact that they get algorithmically nudged in Zuck’s wonderland every step of the way. Wall Street itself realized that revenues at 1 Hacker Way actually kept on rising—until they jumped in 2021. COVID, remember?
The Metaverse, however, wasn’t really that great of a hit, and after the virus-bonus revenue fell back in line the following year, FB lost a staggering two-thirds of its value. A trillion-dollar meme stock.
An attribute it then turned into today’s heights by hitching itself to the AI bandwagon.
Releasing the LLaMA weights is undoubtedly a commendable move. It sounds utterly impressive when you can claim, “While we’re working on today’s products and models, we’re also working on the research we need to advance for LLaMA 5, 6, and 7 in the coming years and beyond to develop full general intelligence,” in an earnings call. Pretty much like that strange man proclaimed five years ago: “I want 5G, and even 6G, technology in the United States as soon as possible.” Numbers: They go up, up, and up.
Hype aside, I am not really aware of any practical applications for LLaMA 3. Zuck bought lots of GPUs. Both Jensen and I are happy about that. Maybe they thought they had all this data that people have entered into their apps. Maybe they could train an LLM on it. With GPT-3, there was this notion that the size of the training corpus was all that mattered. After all, OpenAI’s chatbot was such a wonder, and it had jumped into existence just via the increase of its training data. I speculate that a trillion training tokens derived from FB discussions yield surprisingly little meaningful reasoning power. Especially compared to actual content like, for instance, Wikipedia.
The pressure to come up with something must have weighed heavily on 1 Hacker Way. As those two transformer-based applications (LLMs and Image Diffusers) broke into public view and kicked the world into a frenzy that seemingly became the new normal, Meta itself had just spent around $50 billion on developing, well, the Metaverse. Which received rather little positive reaction, to put it mildly.
The total and utter failure of Zuck’s idea to come up with a whole new thing left Meta with no choice but to jump on the AI hype PR scheme. And up to this day, it has worked rather well. While revenue is ticking along as expected, the stock is kissing new heights. For now.
So, what’s next? Nobody knows.
What will happen is that Internet population growth will end. There are simply no more people left who could join. Pretty much everybody who could go online has already done so. While 25% of the world’s population is younger than 15, many of them live in underdeveloped parts of Africa. Furthermore, young people hardly flock into the Meta family of products once they get their first Internet device.
Meta’s revenue growth will therefore stall together with the plateau in its user count. While they continue to make a lot of money, a current PE ratio of around 30 expects something else: more money. You need to grow profits to justify such a valuation. A quick way to bump profits would be to reduce costs. Twitter is still up and running, despite Mr. Musk letting go of most of its workforce. A tempting move that could save the numbers for a quarter or two at Menlo Park as well. The problem is that this approach only works briefly: costs can go down to zero, but no further.
Which means that Meta needs to increase revenues while user numbers can no longer grow.
Can Zuck’s companies accomplish that? They might, but it would not be pretty: billions of people have delegated a great part of their social existence to the “Meta Family of Products”. (What’s in a name?) A sticky situation in itself. Add to that the addictive aspects that rival nicotine, and you realize that half the planet as a user base won’t go anywhere fast.
Wealth, as well as the reluctance to change app habits or social topology, tends to grow with age. Meta owns people’s time and attention in staggering amounts.
Here comes the part that isn’t pretty: it is rather easy to manipulate people online. Tech is able to do it, and will increasingly be able to. There is a threshold beyond which you no longer realize that you got nudged.
When the magician manages to direct your attention successfully, all sorts of things are possible. With one serious difference: magic lives off the effect that the outcome shows you that you must have missed something. You are supposed to notice that what just happened is impossible.
Manipulation for gain, aka advertising, has a different aim: you should be made to act in certain ways, all the while thinking that you want to act that way.
The total spending of Meta-family users accounts for a mind-boggling share of GDP. And, as discussed, most of those users will not go anywhere. If Meta does not f*ck up royally, pretty much half of the world’s adults will continue to point their noses, eyes, minds, and wallets its way.
Turning on the manipulation engine will not be one deliberate, conscious act or one magnificent large piece of software. Lots of little changes will yield lots of little benefits. With billions of people, you can do a whole lot of A/B testing. Nobody will notice. Everybody’s feed is different, and the fact that you see wording that is ever so slightly different will not trigger any of the societal mechanisms that would raise a reaction.
Jacob Riis used flash photography at the end of the 19th century to show the world how poor people lived in NYC, and he changed the world for the better. I cannot imagine how we can illuminate the modern plight of getting nudged into an ultimately unhappy existence that looms on the horizon.
What happened? Some bean counters at Best Western figured out how much money gets spent on heating water for the guest showers. Probably a fair bit, since BW Hotels operates 320,000 rooms globally.
So let’s put in those water-saving devices.
Yes, actually taking a shower is pretty much impossible now. Your body still gets wet. Somewhat, in some places. But forget about mundane actions like washing the soap off your skin. That measly drip coming out of the device will not do that. Water is an essential feature of taking a shower.
BW Hotels think that this is not a given.
The amazing thing is that somebody thought to put up a sign reading:
Dear Guest,
Please be
advised that
this shower is
equipped
with a gentle,
“Rainwater”
style head.
It is not a
“Power” shower
Now we know. It is not a bug, it is a feature. Bugs are unintentional. This one is very much intentional. And it sucks. Really badly.
So, if I have the choice between staying in a Best Western or in any other establishment, I know what to do.
Trying to reframe what is normal (a shower that works) as something special (a “power” shower), so that its inferior replacement can be sold as a feature, is evil. Intentional, and I will not support it.
The TV in the room was from 2011. A lot has happened in TV tech since then.
LLMs have their limits, and where they excel makes a difference. As of June 2024, they continue to evolve. Anthropic’s Claude 3.5 works well for coding simple things in Python. It feels like the LLM has been heavily trained on existing code. It might actually be just as good in other applications; I wouldn’t know, since I only use it for coding right now. Even on the paid plan, it has a message limit, which feels very 2023. So I spend the limited interactions where I get the highest benefit, which is coding. The artifact window is a great idea, and the speed of generation is appreciated. With gpt4o, I had to interleave work: make a request, switch to a different task while gpt4o sputtered out characters at Morse code speed. It probably runs on a colony of squids at the bottom of the Mariana Trench that OpenAI taught to use Morse code with each arm.
And yes, an image like this one I create with gpt4o. I don’t even know if Claude can do that. I don’t mind having multiple LLMs; I gladly pay for both of them, as I do for search. Right now, I am very happy that there is more than one solution. I tried to use Google AI, but it was too complicated to figure out which offering would fit my needs. And I am not aware of a key feature that only they could give me. They already have all my email and have read the entire Internet. If I can avoid it, I would rather not help them any further. Sure, if they were as good at coding as Claude, I would use them in a second. I have morals, but I cannot save the world single-handedly either.
One of my bigger fears is that LLMs might take the same turn that Google Search did. It was a great idea. It worked great, enabling a phase of the Internet in the early 2000s that was very promising. Then it became what we suffer from today—a swamp. Barely functional. Generating around $150 in profit for Google per user annually. Which means the advertising companies make even more. Which means that I lose even more than that. By now, the cost of using Google Search, in being manipulated, is much higher than its benefit. The SEO world that Google Search presents is not a nice one. I happily give Kagi money to keep some distance from that swamp.