danpalmer an hour ago

If you write policy about AI, you're doing it wrong. AI is an implementation detail; policies must be written for outcomes.

Discrimination by law enforcement, exclusion from loan approval, bad moderation on social networking, cheating on exams, creating fake news or media about people, swallowing up user data... all the negative social impact of AI can be achieved without it, and much of it is already illegal anyway.

Legislation that is predicated on AI will fail in the long run. Legislation that focuses on the actual negative outcomes will stand the test of time much more.

  • Cheer2171 an hour ago

    Oh, so we can't address any specific problems with any technology, because we should actually be fixing all of society at the root of all those problems. So while you wait for our broken political system to solve those root causes, enjoy feeling smug about not having implemented any imperfect, temporary bandaids to stop some bleeding.

Are you working on fixing those root problems? Or after dismissing short-term policy bandaids, are you going to go back to working in an industry where you will probably make more money if governments don't do any tech regulation in the short run?

    Your commitment to the long run will lead to paralysis and do nothing in the long run.

    • danpalmer an hour ago

If there are problems that are specific to AI then sure, we should legislate about them. For example, defining what "fair use" means for AI training is clearly a new area.

      But most of the pushback I've seen to AI in policy is so over-fit to current AI that it would be trivial to work around it. You can argue that we'd be letting perfect be the enemy of good, but I think we'd be making policies that will be out of date by the time they even make it into law, and that we'll never make any progress at all.

      That said, I'm all for being proven wrong. The US tends to write highly specific legislation so I'm sure it'll try a few of these. The EU tends to write much more vague legislation specifically for this reason. We'll see how they end up working.

MatekCopatek 9 hours ago

It's hard to read this without being cynical.

How seriously would you take a proposal on car pollution regulation and traffic law updates written by Volkswagen?

  • protocolture 8 hours ago

    Am I the US Government in this scenario?

  • SpicyLemonZest 3 hours ago

    If Volkswagen's competitors ran around saying that cars aren't dangerous and there's no need to regulate them, and their critics insisted that you're a mark if you accept the premise that cars are a useful transportation method at all, I don't suppose I'd have a choice but to take it seriously. If you know of a similar analysis from a less conflicted group I'd love to read it!

notatoad 5 hours ago

i see the comments here are pretty cynical about this post, and probably for good reason. especially "you might have to start taxing consumption instead of income because people won't have income anymore"

but at least a couple of these proposals seem to boil down to needing to tax the absolute crap out of the AI companies. which seems pretty obviously true, and its interesting that the ai companies are already saying that.

  • eucyclos 5 hours ago

    I've found large entrenched players tend to prefer slightly more than a reasonable amount of taxation and regulation in any industry; governments are easier to predict and handle than scrappy competitors.

  • varispeed 5 hours ago

    > its interesting that the ai companies are already saying that.

This is just cheap PR to launder legitimacy and urgency, and to create a false equivalence between an AI agent and an employee.

I think this is a sign of weakness, having seen AI rolled out in many companies where it already shows signs of being an absolute disaster: summaries changing meaning and losing important details, so tasks go in the wrong direction and take time to be corrected; developers creating unprecedented amounts of tech debt with their vibe-coded features; massive amounts of content that sound important but are just the equivalent of spam; managers spending hours with an LLM "researching" strategy and feeding the FOMO; and so on.

throw-10-13 3 hours ago

Seems like the crypto grifters and moon boys have found a new home.

blibble 9 hours ago

they seem to have omitted the scenarios where the newly unemployable electorate turn on them

  • mrshadowgoose 5 hours ago

    I used to think that "AI operating in meatspace" was going to remain a tough problem for a long time, but seeing the dramatic developments in robotics over the last 2 years, it's pretty clear that's not going to be the case.

    As the masses fade into permanent unemployment, this will likely coincide with (and be partially caused by) a corresponding proliferation in intelligent humanoid robots.

    At a certain point, "turning on them" becomes physically impossible.

  • wmf 6 hours ago

    The higher taxes proposed here could be used to buy off the electorate.

robbrown451 6 hours ago

I'm having trouble understanding what they want to "upskill" those people to do.

What skills won't be replaced? The only ones I can think of either have a large physical component, or are only doable by a tiny fraction of the current workforce.

As for the ones with a physical component (plumbers being the most cited), the cognitive parts of the job (the "skilled" part of skilled labor) can be replaced by having the person just follow directions demonstrated onscreen. And of course, the robots aren't far behind, since the main hard part of making a capable robot is the AI part.

  • visarga 3 hours ago

I don't think "replaced" is a good word here; "augmented" and "expanded" fit better. With AI we are expanding our activities: users expect more, and competition forces companies to do more.

But AI can't be held liable for its actions; that remains a human role. It has no direct access to the context it is working in, so it needs humans as a bridge. In the end, AI produces outcomes in the user's local context, so from intent to guidance to outcomes, everything is user-based, costs and risks included.

I find it pessimistic to take that static view of work, as if "that's it, everything we need has been invented," and now we are fighting for positions like musical chairs.

  • tintor 4 hours ago

    'main hard part of making a capable robot is the AI part'

    Robots are far behind.

Building mechanical hands with human-equivalent performance is as hard as the AI part.

    Strong, fast, durable, tough, touch and temp sensitive, dexterous, light, water-proof, energy efficient, non-overheating.

    Muscles and tendons in human hands and forearms self-heal and grow stronger with more use.

    Mechanical tendons stretch and break. Small motors have plenty of issues of their own.

    • robbrown451 4 hours ago

For most things they don't need to be "human equivalent." I'd be willing to bet the current crop of robots we're seeing could do most tasks like vacuuming, cooking, picking up clutter, folding laundry and putting it away, making beds, touch-up painting, gardening, etc. It seems to be getting better very fast. And if mechanical tendons break, you replace them. Big deal. You don't even need a person to do the repair.

    • AndrewKemendo 4 hours ago

      And your claim is that those will never be solved?

      As a professional robotics engineer I can tell you for a fact they are coming soon.

      • WastedCucumber 4 hours ago

There's nothing in that post claiming those problems will never be solved. I understand the claim as "the hardware component of robotics needs more work and will take some time, compared to AI capabilities/software," or something like that.

Maybe you could clarify what your experience on the matter is, how the state of the art looks to you, and most of all what timelines you imagine?

      • DaveZale 4 hours ago

You are talking about high-cost environments, at least for the moment?

Come on... show me a robot that can run a farm that grows organic produce at an affordable price. It's the lowest-wage job out there. Automating it would push prices far out of range for the 99%, but the billionaires couldn't care less?

        • Jensson 20 minutes ago

          You need an AI to do that, affordable robots are already here but the intelligence is not.

  • DaveZale 4 hours ago

Agreed for almost all jobs, but some, like my father's, involved crawling inside huge metal pieces to do precision machining. For unique piecework, it might not be economical to train AI. Surely equivalents to this exist elsewhere.

musicale 3 hours ago

The most immediate impact might be the bursting of the AI bubble and a dotcom-like crash of tech stocks and businesses like Anthropic.

Financial circularity could also lead to instability.

  • Mistletoe 2 hours ago

    This is actually the most likely outcome but it's the quiet part an AI company isn't going to say out loud.

remarkEon 4 hours ago

ctrl + f for "immigration" returns nothing.

Not serious, not worth reading.

  • vkou 3 hours ago

    This is a horn that must be harped on, frequently and loudly.

    Anyone with anxieties over immigration should have those same concerns over AI, many times over.

    Skilled immigrants just got a $100,000/year head tax in the US. Where is such a tax for AI?

    • remarkEon 3 hours ago

      100% and the anxieties are related. If AI is going to start cannibalizing entire classes of employment, then what's the point of high levels of immigration to "support the job market"?

mjbale116 4 hours ago

anthropic are so sure about the incoming economic impact of their AI that they want to start talking about policy - for our sake.

Incredible stuff...

AndrewKemendo 9 hours ago

Much like the end of history wasn’t the end of history

LLM-Attention centric AI isn’t the end of AI development

So if they are successful at locking in, it will be to their own demise, because it doesn't cover the infinitely many pathways for AI to continue down, specifically intersections with robotics and physical manipulation, that are ultimately far more impactful on society.

Until the plurality of humans on the earth understand that human exceptionalism is no longer something to be taken for granted (and shouldn't have been) there's never going to be effective global governance of technology.

  • blind_tomato 7 hours ago

> Until the plurality of humans on the earth understand that human exceptionalism is no longer something to be taken for granted (and shouldn't have been) there's never going to be effective global governance of technology.

Could you elaborate more on this? FYI, I fully agree with the preceding sentences.

    • icandoit 6 hours ago

The bacteria that aren't antibiotic-resistant eventually get replaced by those that are.

      Maybe you are alcohol, gambling, and pornography resistant but maybe you have friends and family that aren't. Are you picking up their slack?

      What circumstances make "going Amish" look, not just reasonable, but necessary for survival?

    • AndrewKemendo 4 hours ago

      It’s taken for granted that humans are the best choice for how to accomplish tasks.

      The tasks humans are best at now are different than 10kya.

      The world changes, new human jobs are made and humans collectively move up the abstraction chain. Schumpeter called this creative destruction and “capital + technology” is the transition function.

At the point where "capital + technology" no longer needs a human, and that will happen (if not in my lifetime then at least in the next 500 years), there will be nothing more to argue for or retain.

      So unless humanity recognizes this and decides to organize as humans (not as europeans, or alabamans or han etc…) then this is the only possible outcome.

      Me personally, I don’t think that’s mathematically/energetically possible for humans to do because we’re not biologically capable of that level of eusocial coordination.

      • ares623 2 hours ago

        As some Australian video game character once said: “as long as there are two people in the world, someone is going to want someone dead”

  • Nasrudith 5 hours ago

I highly doubt there is ever going to be "effective governance" of technology, for multiple reasons. It would require impossible foreknowledge of the impacts of every possible tech and dystopian levels of control to prevent new ideas from coming into play. We cannot even get the direction of impact right a priori. Even with that foreknowledge, the dystopia would have to remain both stable and unmutated over generations. All the while, its control creates its own counterforce, incentivized to invent tech outside that control to topple the forces of stagnation.