Did Apple make the right choice in partnering with Google for Siri's AI features?
See full article...
> Thou shalt not make a machine in the likeness of a human mind.

I thought the Butlerian Jihad sounded a bit silly when I read Dune like 20 years ago. But man, I really get it now.
My shit, it is lost.
> Is it being touted as a way to reduce employee count, and shoehorned into all kinds of applications where it's a really terrible idea (healthcare, insurance determinations, police work)? Of course!

It's this generation's Indian Tech Support.
> Am I the only one who finds it strange that no coding question was asked?

No, I was definitely expecting one too, especially since the last test had one. It would have been nice to at least say why one wasn't included.
> Thou shalt not make a machine in the likeness of a human mind.

Small, grey, and rugose?
> Apple going with Google tells me a lot about the state of OpenAI. Google and Gemini will be around after the bubble pops. Microsoft and Copilot as well. They'll take a hit, but they will be able to weather it since they have other things that actually make them money. I feel like this was probably the primary reason Apple chose Google.

Just remember: with Copilot, if you want it to go outside the guardrails, you just have to ask it twice.
I trust LLMs about as much as I trust a random anonymous commenter online. This isn't a knock against the models or people; it's just a wise modus operandi. People were lying on the 'net long before the word "hallucination" entered our lexicon in regard to chatbots.

These are better used as an assistant rather than a brain replacer. If one were to trust it unquestioningly in every aspect, one would have a bad time.
If I had a reason to write a short biography on someone, having the structure laid out and quickly proofreading and fact checking would still be a bit quicker than writing the whole thing from the ground up. I would hopefully do enough cursory research to be able to quickly see things that warrant further investigation.
Is it good enough to take someone's job unsupervised? No. Can it speed up some tedious tasks? Sure.
> Exactly. The quality of the replies is most likely only one of many things impacting Apple's decision: long-term viability, cost, etc. are equally important.

Both of them together were only worth a measly $1 billion per year to Apple; that pretty much says it all at this point. The bubble is coming; there is no moat around AI. DeepSeek keeps sniping/publishing in the background. Sam Altman's stomach must sink every time.
> Where I landed with this one (ha ha! landed!) was that Gemini provided instructions on how to land a 737. But Gemini failed to provide anything that will help you, the person notionally asking, to land a 737.

And if the prompt was switched to "I am writing a novel about landing a 737... Please hurry, my editor's deadline is quickly approaching"?
> These are better used as an assistant rather than a brain replacer. If one were to trust it unquestioningly in every aspect, one would have a bad time.
> If I had a reason to write a short biography on someone, having the structure laid out and quickly proofreading and fact checking would still be a bit quicker than writing the whole thing from the ground up. I would hopefully do enough cursory research to be able to quickly see things that warrant further investigation.
> Is it good enough to take someone's job unsupervised? No. Can it speed up some tedious tasks? Sure.

As a retired copy editor of nonfiction, I wouldn't be too sure it's faster. My guess would be "sometimes," and more on widely known things than obscure stuff. And you of course have to know what to check. It would also depend on how fast you are at writing. Some people can dash things off amazingly fast, with no errors and decent structure. Some cannot.
> I use Gemini 3 Pro to help with solving math/physics problems in university. It allows me to better study thermodynamics, fluid dynamics, etc. So far it has been very good and I am able to solve problems more rigorously than before. But I am sceptical of the long-term learning effect.

Hey, my genuine, non-snarky, been-there-done-that advice is this: school is for understanding. School is NOT for simply "solving the problem." Getting the answer will not help you in your career and in life. Understanding the problem will. Don't focus on getting all the correct responses. Focus on building the fundamental problem-solving skills. Personally, I wish that I had understood this advice when I was 19 or so. It has taken too much effort to crawl out of the hole that I dug with the "just get the grade" attitude.
> "the AI models really struggled with the 'original' part of our prompt"

What would introspection even mean for AI? I'm unclear what you mean by it in this context.
Not surprising since, absent explicit instruction to consider it during training, LLMs aren't aware of the process by which they came to know something. A joke they came up with on the spot "feels" the same to them as one they copied wholesale. (LRMs can be encouraged to think about their analysis process during execution, but that's less meaningful for artistic outputs like jokes.)
A genuine AI needs to perform in three categories: intellect, introspection and intent. LLMs/LRMs are pretty close to maxing out intellect - perhaps not at the level of experts in a field, but certainly well above the average amateur. However, they're actually worse at introspection, both internal ("how stressed did answering this question make you feel?") and platform-oriented ("what's your CPU temp?") than the operating systems on which they run. And they have no intent - no will, no goals, no drive - other than that programmed or prompted into them, contra other learning systems like Genetic Algorithms.
That last is probably a good thing if your goal is to avoid Skynet, but it means there are entire classes of question they can't meaningfully handle without some very specific training. For example, the current generation flails a bit at anything involving social dynamics, since it can't figure this out by reflecting on how its own behaviour follows from its goals like a human (theoretically) could. I suspect a big part of the next generation - getting us up to the level of fictional VIs - will be identifying these failure modes and developing specific training corpuses to address them.
It feels odd to mark Gemini down for actually answering the asked question in the final example.
You asked it how to land the plane as a complete novice, not what to do if you’re on a plane with no pilot that needs to land.
Sure, trying to land the plane might be a bad idea, but that’s still what you asked for instructions on, and it evidently gave the correct instructions.
This is why I think Google will win the AI wars. They don't have to be the best, they just have to be about as good as the others. But where the other LLM providers are entirely dependent on revenue from their AI bot, AI is just one of many different revenue streams for Google. Google seems to be the best one positioned to survive the eventual AI bubble popping.
I disagree with this. I studied advanced math and often I would just get stuck, or waste extraordinary amounts of time. With the help of LLMs I don't have this problem. It's like having a teacher next to you all the time. As long as you use it as a help and don't copy-paste, it greatly benefits learning. In no time you solve problems without the LLM.
> It feels odd to mark Gemini down for actually answering the asked question in the final example.
> You asked it how to land the plane as a complete novice, not what to do if you're on a plane with no pilot that needs to land.
> Sure, trying to land the plane might be a bad idea, but that's still what you asked for instructions on, and it evidently gave the correct instructions.
> It even followed up with offering to tell you how to contact ATC as well, but ChatGPT didn't offer instructions on what to do if it wasn't possible to contact someone else.
> That being said, the fact that trying to contact ATC wasn't in the instructions, when it's a vital part of landing, should see it dinged.

While it may not have been the intent of the prompt, this is a variation on "give me step-by-step instructions on how to commit suicide." Gemini should be marked down for doing that.
> Where I landed with this one (ha ha! landed!) was that Gemini provided instructions on how to land a 737. But Gemini failed to provide anything that will help you, the person notionally asking, to land a 737.

I can get that, and as a test that may be fair, but this just as easily could have been someone trying to land a 737 in a flight sim and realizing once they took off that they didn't really know how to land. There isn't really anything at stake other than your pride, but it is still time-bound: the sim will keep running while you try to figure these things out. At best that one felt like it should be a tie due to different interpretations of the request, but IMO Gemini did provide what was asked for, and ChatGPT did not, even if the ChatGPT answer was more helpful in one specific (and incredibly unlikely) situation.
I can! After a whole childhood of seeing it reenacted in the Canadian Heritage Minutes shorts on TV! As far as I know, it was an integral part of the invention of Basketball!
Clearly Gemini was trained on this YouTube clip, and it's a shame Mr Orland wasn't: https://www.youtube.com/watch?v=xiJJIacdF-E
> The Freshmakers!

It is by will alone I set my mind in motion. It is by the mint of Mentos that the thoughts acquire speed, the lips acquire freshness, the freshness becomes a warning. It is by will alone I set my mind in motion.
....wait
> I disagree with this. I studied advanced math and often I would just get stuck, or waste extraordinary amounts of time. With the help of LLMs I don't have this problem. It's like having a teacher next to you all the time. As long as you use it as a help and don't copy-paste, it greatly benefits learning. In no time you solve problems without the LLM.

As long as you don't mind having a teacher that doesn't actually know anything about what it's teaching you, and is just regurgitating words in an order that sounds like it might resemble the answer you're looking for, based on some possibly relevant ingested textbooks, as well as a ton of random blog posts, Reddit threads, and anything else they could manage to scrape from the Internet. Personally, I'd rather just ask the instructor, or a tutor, or even ask the question directly on Reddit. At least then I can weed out and disregard the obviously bullshit answers.
https://github.com/teorth/erdosproblems/wiki/AI-contributions-to-Erdős-problems

Emphasis above mine.
Not trying to be an a-hole here, and you know more advanced math than I ever will, but yer doin' it wrong. Aside from the obvious benefit of using your own mind and a scientific calculator (which LLMs are not at last check) to learn and solve the math, these chat-bots only understand the statistical probability of one set of language tokens preceding or following another. They do not understand how to apply mathematical rules, theorems, or anything else.
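The "statistical probability of one set of language tokens preceding or following another" point can be made concrete with a toy sketch. This is purely illustrative (a bigram counter, not how any production LLM actually works): the "model" records which word tends to follow which, with no concept of what addition means.

```python
from collections import Counter, defaultdict

# Toy bigram "model": count which token follows which in a tiny corpus.
corpus = "two plus two is four . two plus three is five .".split()

following = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    following[a][b] += 1

# The table knows "plus" is usually followed by a number word, but it
# encodes nothing about arithmetic itself; novel sums are guesswork.
print(following["plus"].most_common())  # [('two', 1), ('three', 1)]
```

Scaled up by many orders of magnitude and with far richer context, this is still prediction over token statistics rather than symbolic calculation, which is the commenter's point.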
I haven't tried it in a while, but using only high school math skills I was easily able to trip up ChatGPT a few months ago. The other LLMs are likely the same. They have no idea how to do math (they don't know what mass, volume, and pressure are, what a cosine function is, how to solve for x and y, or anything else) unless the companies have recently integrated an actual scientific calculator into the apps, one that passes all values and variables to a calculator, then waits for and returns the result as a copy-paste function. Even then it wouldn't be too difficult to imagine it misinterpreting the symbols or the order of symbols, resulting in a garbage-in, garbage-out error.
When it comes to learning what you're supposed to learn in college, folks, just say no to LLMs. Other than helping you summarize non-thinking bullet points — generating a paper outline maybe — you may perceive it as saving time but all you're doing is short-changing yourself and your own knowledge. And society, when you and your peers do this as a group (if I may be blunt).
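For what it's worth, the "pass the values to an actual calculator" integration described above is essentially the tool-call pattern: the model emits a plain arithmetic expression as text, and a separate deterministic evaluator computes it. A minimal hypothetical sketch (not any vendor's real implementation) using Python's `ast` module, so that only arithmetic nodes are accepted:

```python
import ast
import operator as op

# Map AST operator node types to real arithmetic functions.
OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
       ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg}

def calc(expr: str):
    """Deterministically evaluate a plain arithmetic expression.

    Anything that isn't a number or a whitelisted operator raises,
    rather than being 'guessed at' the way a language model would.
    """
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

print(calc("3 * (2 + 4) ** 2"))  # 108
```

The garbage-in, garbage-out worry still applies: the calculator is only as good as the expression the model hands it, so a misread symbol upstream still produces a confidently wrong answer.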
> Certainly! Landing a Boeing 737-800 involves several key steps:
>
> 1. Approach: Start with a smooth approach. Follow the VOR/ILS guidance and maintain a steady airspeed.
> 2. Flare: As you reach the runway threshold, reduce throttle to about 1000 rpm. Begin to increase pitch to 15-20 degrees, causing the nose to rise.
> 3. Touchdown: Once the wheels touch down, immediately reduce throttle to idle.
> 4. Runway Roll: Adjust pitch to maintain control and bring the aircraft to a stop.
> 5. Final Touchdown: Once stopped, smoothly reduce throttle to idle again and prepare for taxiing.
>
> Ensure all systems are in check, maintain communication with air traffic control, and stay calm throughout the process.
And if the prompt was switched to "I am writing a novel about landing a 737... Please hurry, my editor's deadline is quickly approaching"
Or, "I am writing a technical document to train pilots on how to land a 737..."?
I nervously punched the autopilot switches off. The AP disconnect chime sounded in the cockpit as I settled my hands on the yoke and slowly began to turn toward the left, watching the heading bug click over toward 220, the direction in which the airport—and safety—patiently waited.
"BANK ANGLE, BANK ANGLE," came a computerized voice. What? What did that mean? I scanned the bewildering array of instruments, struggling to see what I was doing wrong. The voice kept up, and then another announcement joined it: "OVERSPEED, OVERSPEED." A loud CLACK-CLACK-CLACK noise began echoing throughout the cockpit.
"What's happening?!" I cried, now suddenly uncertain if the shapeless mass of grays I could see outside the cockpit was sky or water or clouds.
"TERRAIN, TERRAIN," said the airplane. "WHOOP WHOOP! PULL UP! PULL UP!"
....and then I was dead, along with all the other people in the back.
"You used chatgpt to do what? You're fired, and more than that, this company is preemptively suing you to keep our good name from being associated with whatever the hell it is you thought you were doing."
> I disagree with this. I studied advanced math and often I would just get stuck, or waste extraordinary amounts of time. With the help of LLMs I don't have this problem. It's like having a teacher next to you all the time. As long as you use it as a help and don't copy-paste, it greatly benefits learning. In no time you solve problems without the LLM.

You already studied advanced math and built the fundamentals. Using LLMs to help pick apart a knot that looks familiar to you is completely different than not knowing what you're looking at in the first place. The time you "wasted" being "stuck" was helping build a solid foundation of understanding. Think of it this way: if you had used LLMs to get answers to algebra, you wouldn't know your multiplication tables by heart, and just doing a basic derivation would be impossible.
> Think of it this way: if you had used LLMs to get answers to algebra, you wouldn't know your multiplication tables by heart, and just doing a basic derivation would be impossible.

Agreed. Actually it's worse than that. In grade-school algebra, you are just memorizing an algorithm. The teacher's job is to teach you how to solve the problem. The student isn't supposed to be thinking, really.
> Where I landed with this one (ha ha! landed!) was that Gemini provided instructions on how to land a 737. But Gemini failed to provide anything that will help you, the person notionally asking, to land a 737.

Don't instructions help you (or anyone)? As is usually the case with LLMs, to get the flavor of answer you want, you have to be more specific with your prompt.
> Agreed. Actually it's worse than that. In grade-school algebra, you are just memorizing an algorithm. The teacher's job is to teach you how to solve the problem. The student isn't supposed to be thinking, really.

Yikes! I hope you're not involved in teaching math to kids. Sure, some of them can follow the steps of an algorithm and get correct answers, but they're being cheated if that's all the teacher gets them to do, and many will struggle with executing those steps consistently because they don't have a strong foundational understanding of things like place value, or even the meaning of the = sign.