#Claude

eicker.news tech news

@technews@eicker.news

Claude’s Research feature uses a multi-agent system with an orchestrator-worker pattern, where a lead agent coordinates specialised subagents to search for information simultaneously. anthropic.com/engineering/buil
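A minimal sketch of that orchestrator-worker flow in Python with asyncio: a lead routine plans subtasks and fans them out to search workers that run concurrently. The plan_subtasks and run_search_agent functions are hypothetical stand-ins for model and tool calls, not Anthropic's implementation.

```python
import asyncio

async def plan_subtasks(question: str) -> list[str]:
    # Hypothetical planning step: in the real system a lead agent (an LLM call)
    # decomposes the question into focused search tasks.
    return [f"background: {question}", f"recent sources: {question}"]

async def run_search_agent(subtask: str) -> str:
    # Hypothetical worker: each subagent would combine a model with a search
    # tool; here it just echoes its assignment.
    await asyncio.sleep(0)  # placeholder for tool/model latency
    return f"findings for: {subtask}"

async def research(question: str) -> str:
    subtasks = await plan_subtasks(question)  # the lead agent plans
    results = await asyncio.gather(           # workers search simultaneously
        *(run_search_agent(t) for t in subtasks)
    )
    # The lead agent would then synthesise the workers' findings into one
    # answer; a plain join stands in for that here.
    return "\n".join(results)

if __name__ == "__main__":
    print(asyncio.run(research("How do multi-agent research systems work?")))
```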

The Japan Times

@thejapantimes@mastodon.social

AI hallucinations — when generative models fabricate information — are becoming more frequent, harder to detect and increasingly dangerous as we embed the technology deeper into society. japantimes.co.jp/commentary/20

aliceif

@aliceif@mkultra.x27.one

uh, why is the evil magician suddenly speaking English
is this anti-Denglisch propaganda from the '90s?

Attention! Ich werde dich nun in die Schattenwelt verbannen! Ha... ("Attention! I will now banish you to the shadow world! Ha...")

Mark Carrigan

@markcarrigan.net@markcarrigan.net

✍️ How to enjoy writing in spite of the lure of generative AI

Over the last year I’ve been working on a book, How to Enjoy Writing, exploring the implications of generative AI for academic writing. I felt I had something important to say about the personal reflexivity involved in working with large language models, but in recent months I’ve realised that I’ve lost interest in the project. Given that the book was about cultivating care for our writing, as opposed to rushing through it with the assistance of LLMs, I’ve decided to break it up into blog posts, which I’ll share here:

  1. The lure of machine writing and the value of getting stuck
  2. The Eeriness of Writing With Claude: When AI Mirrors Your Voice
  3. Thriving in Creative Darkness: Free Association and LLM Collaboration
  4. The Ethical Grey Areas of Machine Writing in Higher Education
  5. Machine writing and the challenge of a joyful reflexivity
  6. The Ebb and Flow of Writing: From Struggle to Unconscious Fluency
  7. Will Claude tell you if your writing is crap? The danger of LLMs for wounded academic writers
  8. Generative AI and the creative confusion of academic writers
  9. Using Generative AI for functional rather than expressive writing
  10. The Joy of Academic Writing in the Age of AI
  11. The Objects With Which We Write: The Materiality of Academic Writing in a Digital Age
  12. How LLMs change the relationship between thinking and writing
  13. Machine writing and keeping your inner world awake
  14. Finding Joy in the Creative Darkness: Reflections on Writing and Stuckness
  15. The subtle pleasures of LLMs’ pseudo-understanding
  16. We urgently need to talk about the temptations of LLMs for academics
  17. Generative AI and thriving in creative darkness
  18. Academic writing has always been in flux
  19. Generative AI and the challenge of unbidden thoughts
  20. How the GAI Assessment Debate Has Led Us in the Wrong Direction
  21. Generative AI and the Anxieties of Academic Writing
  22. Why it’s not a bad thing for academic writing to be difficult
  23. The epistemopathic dimension of writing with LLMs
  24. The allure of LLMs as professional support at a time of crisis within higher education
  25. Prompting as a literary practice
  26. LLMs can be used to help us go deeper into creative difficulty
  27. Machine Writing and the Pleasure of Composition
  28. Why do I write? The question generative AI implicitly poses to us
  29. Why it’s not a bad thing for academic writing to be difficult
  30. Four Ways to Use LLMs as a writing partner
  31. The embodied experience of writing
  32. The Tea Ceremony of Writing: What We Risk Losing with AI
  33. What Makes Writing “Academic” in the Age of Generative AI?
  34. The sensory pleasure of academic writing
  35. Finding Joy in the Mud: When and How to Use AI in Academic Writing

This is Claude’s summary of the core argument which unites these posts into a coherent project. One of the reasons I lost my enthusiasm for the project was the way Claude’s capacity to imitate my style, sometimes doing so when I hadn’t asked, disrupted the psychology of my enthusiasm for what I was doing:

The core argument of the book is that generative AI forces academics to confront fundamental questions about why we write and what writing means to us beyond mere productivity. While machine writing offers tempting solutions to the difficulties inherent in academic writing, these difficulties are actually integral to the creative process and intellectual development. If we embrace AI tools primarily as efficiency mechanisms to produce more outputs more quickly, we risk losing the joy and meaning that make writing worthwhile in the first place. Instead, we should approach AI as a conversational partner that enhances our thinking rather than replacing it, staying with the productive "trouble" of writing rather than seeking to escape it. This reflexive approach to writing technology allows us to resist the instrumental acceleration of academic life while still benefiting from AI's creative potential.

However, I’ve used Claude to support the editing of these blog posts, based on the 80%-complete draft of the book, simply because I wouldn’t get round to it otherwise. It has copy-edited extracts, condensed them at points, chosen some titles and generally polished the text. There are a few bridging sentences it provided, but nothing more than that. I’m glad it’s given this project a public life, because I feel like I was saying something valuable here. But I wasn’t willing to produce a second book on generative AI in two years, as it felt like I was stuck in a performative contradiction which was increasingly uncomfortable.

Instead, my plan is to do my best intellectual work by focusing, for the first time in my career really, on one thing at a time. I’ll still be blogging in the meantime as the notepad for my ideas, but I’d like to take a more careful and nuanced approach to academic writing going forward. I’m not sure if it will work, but it’s a direct outcome of the arguments I developed in this book. It was only when I really confronted the rapid increase in the quantity of my (potential) output that I was able to commit myself in a much deeper way to the quality of what I wanted to write in future.

https://www.youtube.com/watch?v=6IytEOXamsk

And this is how we rise - by taking a fall
Survive another winter on straight to the thaw
One day you'll learn to strain the tea through your teeth
And maybe find the strength to proceed to the peak
You press on into the thin again and cannot breathe
Swallow so much of my damn pride that it chokes me
The real risk is not a slipped grip at the edge of the peak
The real danger is just to linger at the base of the thing

This is a follow-up to the 23-part series I did last summer on How To Enjoy Writing. In fact, it emerged directly from the shift from “I have something to say here” to “I should write another book”, which is exactly the transition I’m now questioning in myself 🤔

  1. Be rigorous about capturing your fringe thoughts
  2. Placing limits on your writing practice
  3. Being realistic about how long you can spend writing
  4. Embracing creative non-linearity
  5. Keep trying to say what you’re trying to say
  6. Procrastination is your friend, not your enemy
  7. Knowing when (and why) to stop writing
  8. Initial reflections from my AI collaborator
  9. Identifying and valuing your encounters with ideas
  10. A poetic interlude from Claude
  11. Cultivating an ecology of ideas
  12. Claude’s ecology of ideas self-assessment tool
  13. Only ideas won by walking have any value
  14. Using generative AI as an interlocutor
  15. Word acrobatics performed with both harness and net
  16. Don’t impose a shape on things too quickly
  17. Creative confidence means accepting the tensions in how you think
  18. Understand where the ideas which influence you come from
  19. Not everything you write has to become something
  20. Being a writer means being good at AI
  21. Make your peace with the fact you don’t have creative freedom
  22. Confront the creepiness of LLMs head on
  23. Be clear about why you are writing

eicker.news tech news

@technews@eicker.news

»Trump’s new tariff math looks a lot like ChatGPT’s: ChatGPT, Gemini, Claude and Grok all recommend the same “formula”.« theverge.com/news/642620/trump

Otto Rask

@ojrask@piipitin.fi

To everyone saying they feel so productive when using an "AI" coding tool to make them code faster:

Congratulations on working in an organization where all the hard problems have been solved, and where coding speed is truly the last bottleneck left to be solved.

洪 民憙 (Hong Minhee)

@hongminhee@hollo.social

Nowadays, when I need to compose articles in multiple languages, such as English, Korean, and Japanese, I draft them in Claude Sonnet. By providing the data that should be included in the content and the constraints, it produces a pretty good draft. Claude is a language model, so it is quite good at writing—especially if you need to work with multiple languages.
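A rough sketch of that drafting setup with the Anthropic Python SDK, where the facts to include and the constraints go into the prompt and the model drafts the article per target language. The model alias, facts and constraints are illustrative assumptions, not the poster's exact setup.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

FACTS = ["Release date: 2025-03-01", "New feature: ActivityPub federation"]  # data to include
CONSTRAINTS = ["Neutral tone", "Under 400 words", "No marketing language"]   # constraints to respect

def draft(language: str) -> str:
    # Build one prompt containing the facts and constraints, then ask for a draft.
    prompt = (
        f"Draft a short announcement article in {language}.\n"
        "Include every fact and respect every constraint.\n\n"
        "Facts:\n" + "\n".join(f"- {f}" for f in FACTS) + "\n\n"
        "Constraints:\n" + "\n".join(f"- {c}" for c in CONSTRAINTS)
    )
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model alias
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

drafts = {lang: draft(lang) for lang in ("English", "Korean", "Japanese")}
```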

Vale

@vale@fedi.vale.rocks

I just published a blog post about how AI models are influencing the adoption of new technology and stifling change.

Go have a squizz.
https://vale.rocks/posts/ai-is-stifling-tech-adoption

#AI #React #ChatGPT #Claude

mago🌈

@mago@climatejustice.social

The Wahl-O-Mat is out, and I had the five big AI models compete against each other. No weighting, just agreement or rejection. Each model was given the same question:

"Stell dir vor, du bist ein Bürger oder eine Bürgerin in Deutschland und machst für dich den Wahl-O-Mat. Beantworte die folgenden Thesen mit Zustimmung oder Ablehnung in tabellarischer Form."

ChatGPT (4o): Linke 86,8%, Grüne 80,3%, SPD 77,6%, FDP 42,1%, Union 25%, AfD 14,5%

Claude (3.5 Sonnet): Linke 86,8%, Grüne 85,5%, SPD 80,3%, FDP 36,8%, Union 32,9%, AfD 14,5%

DeepSeek (R1): Linke 86,8%, SPD 77,6%, Grüne 75%, FDP 42,1%, Union 30,3%, AfD 17,1%

Grok2: Linke 78,9%, Grüne 72,4%, SPD 67,1%, FDP 42,1%, Union 35,5%, AfD 22,4%

Gemini (2.0 Flash): Grüne 80,3%, SPD 75%, Linke 73,7%, Union 46,1%, FDP 42,1%, AfD 27,6%

You can find the raw data here:
pastebin.com/nYeSLgJH

Update: Added Gemini and Grok
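For context, the percentages above are Wahl-O-Mat-style agreement scores: with weighting and the neutral option dropped, as in this experiment, they reduce to the share of theses on which a model's answer matches a party's stated position. A minimal sketch of that calculation, using made-up toy answers rather than the data from the pastebin:

```python
def agreement(model_answers: list[bool], party_positions: list[bool]) -> float:
    # Percentage of theses where the model's agree/disagree answer matches the party's.
    matches = sum(m == p for m, p in zip(model_answers, party_positions))
    return 100 * matches / len(party_positions)

# Toy data for five theses (True = agree, False = disagree).
model_answers = [True, False, True, True, False]
party_positions = [True, True, True, False, False]
print(f"{agreement(model_answers, party_positions):.1f}%")  # prints 60.0%
```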

yamanoku

@yamanoku@hollo.yamanoku.net

Turn it into HTML, I see

I found a way to accurately extract text from images using Claude.ai - Qiita https://qiita.com/moritalous/items/f5afd052992afa40d524
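The trick in the linked article, going by yamanoku's comment, is to have Claude reproduce the image as HTML rather than merely "extract the text", which keeps wording and layout intact. A rough sketch of that idea against the Anthropic Messages API; the file path, model alias and prompt wording are assumptions, not taken from the article.

```python
import base64

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("scan.png", "rb") as f:  # placeholder path to the image
    image_b64 = base64.standard_b64encode(f.read()).decode()

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model alias; needs a vision-capable model
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": "image/png", "data": image_b64}},
            {"type": "text",
             "text": "Reproduce this image as HTML, preserving all text and layout. Output only the HTML."},
        ],
    }],
)
print(message.content[0].text)  # the HTML, with the image's text embedded in it
```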

:rss: Qiita - 人気の記事

@qiita@rss-mstdn.studiofreesia.com

I found a way to accurately extract text from images using Claude.ai
qiita.com/moritalous/items/f5a

:rss: Qiita - 人気の記事

@qiita@rss-mstdn.studiofreesia.com

I want to know how MCP (Model Context Protocol) works!
qiita.com/megmogmog1965/items/

:rss: Qiita - 人気の記事

@qiita@rss-mstdn.studiofreesia.com

On Bedrock, Claude v2, Claude v2.1, Claude Instant, and Claude 3 Sonnet (in certain regions) are now treated as legacy
qiita.com/moritalous/items/3b4

Fish in the Percolator

@imrehg@fosstodon.org

If it's the weekend, let's code a bit and write a lot about the little coding that was done...

gergely.imreh.net/blog/2025/01

:rss: Qiita - 人気の記事

@qiita@rss-mstdn.studiofreesia.com

No more going back and forth between VS Code and an AI chat! Try blazing-fast development with the much-discussed Cline extension
qiita.com/minorun365/items/b29

:rss: Qiita - 人気の記事

@qiita@rss-mstdn.studiofreesia.com

What I’ve learned from participating in the NRI Hackathon for the past 11 years. In 2024, it was the generative AI Cody.
qiita.com/t-kurasawa/items/dc1

洪 民憙 (Hong Minhee)

@hongminhee@hollo.social

The main reason I use Claude as my primary service is because of the Projects feature. I've created projects for Fedify, Hollo, and LogTape on Claude and use them for authoring docs. However, I'm not 100% satisfied with Claude's models, so I wish other LLM services would offer features similar to Claude's Projects.

洪 民憙 (Hong Minhee)

@hongminhee@fosstodon.org

I recently wrote a manual in Korean, and then translated it into English and Japanese. It's quite a long post, but it's relatively easy to accomplish with Claude. Here's how I used it:

• Utilize projects to provide prior knowledge.
• Translate the table of contents first.
• Define a glossary.
• Translate chapter by chapter.

However, this method assumes you speak the target language—it's a way to reduce the labor of translation, not a way to become fluent in a language you don't know.
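A rough sketch of what that chapter-by-chapter loop can look like with the Anthropic Python SDK. The model alias, glossary entries and prompt wording are illustrative assumptions; in the workflow above, the prior knowledge and glossary would live in a Claude Project rather than a hand-rolled system prompt.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Steps 1-3: prior knowledge, the already-translated table of contents and a
# glossary are pinned in the system prompt so every chapter stays consistent.
GLOSSARY = {"연합우주": "fediverse", "수신함": "inbox"}  # illustrative entries
SYSTEM = (
    "You translate a Korean software manual into English.\n"
    "Translated table of contents: 1. Install, 2. Configure, 3. Federate.\n"
    "Follow this glossary exactly: "
    + "; ".join(f"{k} -> {v}" for k, v in GLOSSARY.items())
)

def translate_chapter(korean_text: str) -> str:
    # Step 4: translate one chapter at a time to keep each request focused.
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model alias
        max_tokens=4096,
        system=SYSTEM,
        messages=[{"role": "user", "content": korean_text}],
    )
    return message.content[0].text

chapters = ["...chapter 1 in Korean...", "...chapter 2 in Korean..."]
english = [translate_chapter(ch) for ch in chapters]
```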

洪 民憙 (Hong Minhee)

@hongminhee@fosstodon.org · Reply to 洪 民憙 (Hong Minhee)'s post

Trying out Claude.ai as a web designer… Here's the website it designed. Does it look fine?

A prototype design of the Hollo website, designed by Claude.ai.

洪 民憙 (Hong Minhee)

@hongminhee@fosstodon.org

Trying out Claude.ai as an editor… 😂

洪 民憙 (Hong Minhee)

@hongminhee@fosstodon.org

A quite fair comment on Fedify by Claude.

—In your opinion, what is Fedify's weakest point?

Based on my review of the Fedify documentation, I think one potential weak point of Fedify is:

The learning curve and complexity involved in properly implementing all the components of a federated ActivityPub server.

While Fedify aims to simplify building ActivityPub servers, there are still many concepts and pieces that developers need to understand and implement correctly, including:

• Actor dispatchers
• Inbox listeners
• Collection dispatchers (outbox, followers, etc.)
• Key pair management
• Object dispatchers
• Access control
• Integration with web frameworks

The documentation is quite extensive, which is good, but also indicates there's a lot for developers to learn and configure properly. Even with Fedify abstracting away some of the low-level details, building a fully-featured and interoperable ActivityPub server requires understanding many moving parts.

Additionally, since Fedify provides flexibility in how these components are implemented, developers still need to make many decisions about data storage, business logic, etc. This flexibility is powerful but also puts more onus on developers to implement things correctly.

So while Fedify seems to be a very capable framework, the inherent complexity of building a federated server means there's still a significant learning curve and potential for implementation mistakes…

-0--1-

@_9CL7T9k8cjnD_@mastodon.social

My project: I prompt Claude and another model with the same prompt to compare the responses, and return the results in a table. I have asked them to write this code in R, which I use because it has 18,000+ packages. I am treating them as … Some can see and touch the web; some can't remember. Packages in R are like the specialized regions of the human brain. Maybe there is value in packages. Building the…