Thankful Thursday

Jan. 8th, 2026 08:45 pm
mdlbear: Wild turkey hen close-up (turkey)
[personal profile] mdlbear

Today I am thankful for...

Snowflake Challenge #4

Jan. 8th, 2026 10:56 am
snickfic: (snowflake)
[personal profile] snickfic
Challenge #4: Rec The Contents Of Your Last Page. Any website that you like, be it fanfiction, art, social media, or something a bit more eccentric!

We all know about Connections and Wordle, but here are some browser games that last longer and are great for keeping yourself from going insane during Zoom meetings:

2048 Cupcakes. I still play 2048 in times of need, but it's so much more fun with colorful cupcakes.

Squares. If you like word games, here you go. Find all the words in the four by four grid. The dictionary this game uses is highly idiosyncratic, which can be frustrating; how is THIS a word that counts but THAT is only a bonus word?? But it does add to the challenge!
petra: Barbara Gordon smiling knowingly (Default)
[personal profile] petra
Drabbles and limericks for people who requested them:
Chrestomanci
due South + Murderbot
due South + Venom
Interview with the Vampire (TV)
KPop Demon Hunters
Pride and Prejudice
Singin' in the Rain
Slough House
Star Wars

Prompt me if you would like something in one or more of my fandoms. I may not get to you today, but we can have Even More Joy Day tomorrow!
[syndicated profile] smbc_comics_feed

Posted by Zach Weinersmith



Hovertext:
And of course the anti-hallucinogenic drugs that sometimes have tiny legs and walk around.


dolorosa_12: (ada shelby)
[personal profile] dolorosa_12
[community profile] snowflake_challenge prompt 4 asks the following:

Rec The Contents Of Your Last Page

Any website that you like, be it fanfiction, art, social media, or something a bit more eccentric!


Given that the last non-work website that I looked at was a somewhat grim political podcast, I'm going to reinterpret this as an opportunity to link a weird and wonderful piece of longform journalism that I've had bookmarked for a while: The snail farm don: is this the most brazen tax avoidance scheme of all time?

The title doesn't do it justice, and neither does my summary. A septuagenarian who made his money in his family's shoe-selling business empire in the north of England, and who has decades-long associations with the mafia in Naples (including hiding mafia members on the run in his UK properties), has for the past several years invested most of his time and energy in exploiting an elaborate UK tax loophole: if you claim to be running a snail farm on your property (including in residential blocks of flats or office buildings), you pay no tax. In his telling, he's doing this purely to pass the time and keep his mind active in his later years. It's a wild ride.

This kind of long-form written journalism, essay or interview, with left-field subject matter and larger-than-life personalities, is my absolute favourite type of nonfiction.

Snowflake Challenge: A warmly lit, quaint street of shops at night with heavy snow falling.
katiedid717: (Default)
[personal profile] katiedid717 posting in [community profile] agonyaunt
My Grandchildren Don’t Thank Me for Christmas Gifts. Is This a Moral Failure?

My grandchildren are in or nearing their teenage years. Two are from my son and his wife, and two are from my daughter and her husband. Of course, all children love and, to some extent, expect birthday and Christmas gifts. My daughter-in-law and her children continue a tradition of giving me handmade greeting cards every Christmas. They also always send me handwritten thank-you cards for the gifts I send. However, I receive no gifts from my other grandchildren, both boys, and never thank-you cards.

I mentioned this to my daughter, their mother, but there was no response. I suggested that each might give me a card promising 30 minutes of picking up sticks in my yard. I know that gifts should come from the heart with no sense of reciprocity, but the current situation bothers me. There seems to be a lack of moral character being demonstrated, as well as poor ethics and manners.

What do you think?


From the Therapist: You've framed your grandsons' behavior as a case of bad manners or moral failure, but I hear a yearning underneath. No matter how much we tell ourselves that gifts aren't about reciprocity, the reality is that they often carry emotional significance for both parties. The giver wants acknowledgment of their thoughtfulness and investment, while the receiver wants confirmation that they've been truly seen. Both are essentially asking, "Do I matter?"

When we don’t feel seen or appreciated, hurt feelings can disguise themselves as something else, like concern about good character or proper etiquette, because it’s easier to push pain outward than to say, “I feel unimportant to you.” But remember that children take cues from their parents, and I have a feeling that this lack of acknowledgment has more to do with your daughter than with her sons.

For instance, you mentioned that you got no response from her when you brought this up. But instead of telling her what her children should do for you, I’d be curious about why she doesn’t facilitate gift-giving or thank-you-note-writing. I say “she” because most teens don’t do this without some parental prodding, and I imagine that your daughter has her own feelings about your relationship that are being played out in the gifting dynamic.

Maybe gifting between you and her family feels empty or performative, when what she really wants is a different or more meaningful relationship with you. It could be that she perceives you as critical of both her and her sons, demanding something that she doesn't feel she or they owe you. She might also find your suggestion that the boys pick up sticks for you a bit thoughtless: Would it make you happy to ask her children to do something that would feel more like a burdensome chore than something they would actually enjoy giving you?

Meanwhile, you say that your “daughter-in-law and her children” give you cards and write thank-you notes, but I noticed you don’t mention your son. It’s nice that your daughter-in-law has created traditions for her kids around gifting, but this doesn’t mean that her children have stronger characters than your daughter’s children do. It just means that the person your son married facilitates gifting and thanking — and that your son and your daughter don’t.

So what might help? First, separate your hurt feelings from judgments about character. You can feel unappreciated without that meaning that these boys are being raised poorly — or that this is primarily about them. Second, consider what you actually want. Do you want thank-you notes, or do you want to feel more connected to and valued by this branch of the family? If it’s the former, you could issue an ultimatum (no thank-you notes equals no gifts), but I don’t think forced statements of gratitude are what you really want. If you want genuine connection and appreciation, you can start by approaching your daughter with curiosity instead of complaints.

Ask a Manager: Two Tales of Nudity

Jan. 8th, 2026 10:05 am
minoanmiss: plus size lady crowned with flowers (Neolithic Summer)
[personal profile] minoanmiss posting in [community profile] agonyaunt
Well, two tales of skimpy clothing, to compare and contrast.

aurumcalendula: Root and Shaw with a blue background (Four Alarm Fire)
[personal profile] aurumcalendula
(belated) January 6th - 'what are your three favorite F/F pairings from live-action media?' For [personal profile] maggie33


(there are still slots open for the January Talking Meme here)
[syndicated profile] bruce_schneier_feed

Posted by Bruce Schneier

Leaders of many organizations are urging their teams to adopt agentic AI to improve efficiency, but are finding it hard to achieve any benefit. Managers attempting to add AI agents to existing human teams may find that bots fail to faithfully follow their instructions, return pointless or obvious results, or burn precious time and resources spinning on tasks that older, simpler systems could have accomplished just as well.

The technical innovators getting the most out of AI are finding that the technology can be remarkably human in its behavior. And the more groups of AI agents are given tasks that require cooperation and collaboration, the more those human-like dynamics emerge.

Our research suggests that the most effective leaders in the coming years may still be those who excel at the timeworn principles of human management, because those principles apply so directly to hybrid teams of human and digital workers.

We have spent years studying the risks and opportunities for organizations adopting AI. Our 2025 book, Rewiring Democracy, examines lessons from AI adoption in government institutions and civil society worldwide. In it, we identify where the technology has made the biggest impact and where it fails to make a difference. Today, we see many of the organizations we’ve studied taking another shot at AI adoption—this time, with agentic tools. While generative AI generates, agentic AI acts and achieves goals such as automating supply chain processes, making data-driven investment decisions or managing complex project workflows. The cutting edge of AI development research is starting to reveal what works best in this new paradigm.

Understanding Agentic AI

There are four key areas where AI can reliably deliver superhuman performance: speed, scale, scope and sophistication. Again and again, the most impactful AI applications leverage their capabilities in one or more of these areas. Think of content-moderation AI that can scan thousands of posts in an instant, legislative policy tools that can scale deliberations to millions of constituents, and protein-folding AI that can model molecular interactions with greater sophistication than any biophysicist.

Equally, AI applications that don’t leverage these core capabilities typically fail to impress. For example, Google’s AI Overviews irritate many of its users when the overviews obscure information that could be more efficiently consumed straight from the web results that the AI attempted to synthesize.

Agentic AI extends these core advantages of AI to new tasks and scenarios. The most familiar AI tools are chatbots, image generators and other models that take a single action: ask one question, get one answer. Agentic systems solve more complex problems by using many such AI models, giving each one the capability to use tools (such as retrieving information from databases) and to perform tasks (such as sending emails or executing financial transactions).
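To make the single-action-versus-agent distinction concrete, here is a minimal sketch of an agent loop in Python. Everything in it is illustrative rather than any vendor's API: call_model stands in for whatever LLM you use (scripted here so the example runs end to end), and the two tools are hypothetical stubs.

    import json

    def call_model(prompt: str) -> str:
        # Stand-in for a real LLM API call. Scripted so the sketch runs
        # end to end: first request a tool, then declare the task done.
        if "db_lookup ->" not in prompt:
            return '{"tool": "db_lookup", "args": {"query": "open orders"}}'
        return '{"done": "No stuck orders found."}'

    # Hypothetical tools: a chatbot can only answer; an agent can act.
    TOOLS = {
        "db_lookup": lambda args: {"rows": []},         # pretend database query
        "send_email": lambda args: {"status": "sent"},  # pretend side effect
    }

    def run_agent(goal: str, max_steps: int = 10) -> str:
        # Agent loop: ask the model for the next action, execute the chosen
        # tool, feed the result back, and repeat until the model is done.
        transcript = f"Goal: {goal}\nTools: {list(TOOLS)}\n"
        for _ in range(max_steps):
            action = json.loads(call_model(
                transcript + 'Reply as JSON: {"tool":..., "args":...} or {"done":...}'))
            if "done" in action:
                return action["done"]
            result = TOOLS[action["tool"]](action.get("args", {}))
            transcript += f'{action["tool"]} -> {json.dumps(result)}\n'
        return "Step budget exhausted."

    print(run_agent("Check whether any customer orders are stuck."))

A real agentic system layers many such loops together; the point of the sketch is only that the model chooses actions and observes their results, rather than returning a single answer.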

Because agentic systems are so new and their potential configurations so vast, we are still learning which business processes they will fit well with and which they will not. Gartner has estimated that 40 per cent of agentic AI projects will be cancelled within two years, largely because they are targeted where they can’t achieve meaningful business impact.

Understanding Agentic AI behavior

To understand the collective behaviors of agentic AI systems, we need to examine the individual AIs that comprise them. When AIs make mistakes or make things up, they can behave in ways that are truly bizarre. But when they work well, the reasons why are sometimes surprisingly relatable.

Tools like ChatGPT drew attention by sounding human. Moreover, individual AIs often behave like individual people, responding to incentives and organizing their own work in much the same ways that humans do. Recall the counterintuitive findings of many early users of ChatGPT and similar large language models (LLMs) in 2022: they seemed to perform better when offered a cash tip, told the answer was really important, or threatened with hypothetical punishments.

One of the most effective and enduring techniques discovered in those early days of LLM testing was 'chain-of-thought prompting,' which instructed AIs to think through and explain each step of their analysis, much like a teacher forcing a student to show their work. Individual AIs can also react to new information much as individual people do. Researchers have found that LLMs can be effective at simulating the opinions of individual people or demographic groups on diverse topics, including consumer preferences and politics.
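Chain-of-thought prompting needs nothing more than an added instruction. A minimal sketch in Python; the exact wording is one common variant, not a canonical formula:

    QUESTION = "A train leaves at 9:40 and arrives at 11:05. How long is the trip?"

    # Baseline: ask for the answer directly.
    plain_prompt = QUESTION

    # Chain-of-thought: ask the model to show its work before answering,
    # like a teacher requiring each step of the analysis.
    cot_prompt = (
        QUESTION
        + "\nThink through the problem step by step, showing each step,"
        + " then give the final answer on its own line."
    )

In early LLM testing the second style reliably improved multi-step reasoning, and the shown work has a side benefit: the model's mistakes become auditable.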

As agentic AI develops, we are finding that groups of AIs also exhibit human-like behaviors collectively. A 2025 paper found that communities of thousands of AI agents set to chat with each other developed familiar human social behaviors like settling into echo chambers. Other researchers have observed the emergence of cooperative and competitive strategies and the development of distinct behavioral roles when setting groups of AIs to play a game together.

The fact that groups of agentic AIs are working more like human teams doesn’t necessarily indicate that machines have inherently human-like characteristics. It may be more nurture than nature: AIs are being designed with inspiration from humans. The breakthrough triumph of ChatGPT was widely attributed to using human feedback during training. Since then, AI developers have gotten better at aligning AI models to human expectations. It stands to reason, then, that we may find similarities between the management techniques that work for human workers and for agentic AI.

Lessons From the Frontier

So, how best to manage hybrid teams of humans and agentic AIs? Lessons can be gleaned from leading AI labs. In a recent research report, Anthropic shared a practical roadmap and the lessons it learned while building its Claude Research feature, which uses teams of multiple AI agents to accomplish complex reasoning tasks: for example, using agents to search the web for information and calling external tools to access sources like emails and documents.

Advancements in agentic AI that enable new offerings like Claude Research and Amazon Q are causing a stir among AI practitioners because they reveal insights from the front lines of AI research about how to make agentic AI, and the hybrid organizations that leverage it, more effective. What is striking about Anthropic's report is how transparent it is about all the hard-won lessons learned in developing its offering, and the fact that many of these lessons sound a lot like what we find in classic management texts:

LESSON 1: DELEGATION MATTERS.

When Anthropic analyzed what factors led to excellent performance by Claude Research, it turned out that the best agentic systems weren't necessarily built on the best or most expensive AI models. Rather, like a good human manager, the system needs to excel at breaking down and distributing tasks to its digital workers.

Unlike human teams, agentic systems can enlist as many AI workers as needed, onboard them instantly and immediately set them to work. Organizations that can exploit this scalability will gain a key advantage, but the hard part is getting each digital worker to contribute meaningful, complementary work to the overall project.

In classical management, this is called delegation. Any good manager knows that, even if they have the most experience and the strongest skills of anyone on their team, they can’t do it all alone. Delegation is necessary to harness the collective capacity of their team. It turns out this is crucial to AI, too.

The authors explain this result in terms of ‘parallelization’: Being able to separate the work into small chunks allows many AI agents to contribute work simultaneously, each focusing on one piece of the problem. The research report attributes 80 per cent of the performance differences between agentic AI systems to the total amount of computing resources they leverage.

Whether or not each individual agent is the smartest in the digital toolbox, the collective has more capacity for reasoning when there are many AI 'hands' working together. Beyond improving the quality of the output, teams working in parallel get work done faster: Anthropic says that reconfiguring its AI agents to work in parallel improved research speed by 90 per cent.

Anthropic's report on how to orchestrate agentic systems effectively reads like a classical delegation training manual: provide a clear objective, specify the output you expect, give guidance on which tools to use, and set boundaries. When the objective and output format are not clear, workers may come back with irrelevant or irreconcilable information.
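Here is a minimal sketch of that delegation pattern, using Python's asyncio to stand in for a real orchestrator. The async call_model stub and the three-way subtask split are assumptions for illustration, not Anthropic's implementation:

    import asyncio

    async def call_model(prompt: str) -> str:
        # Stand-in for a real async LLM call.
        await asyncio.sleep(0.1)  # simulate network latency
        return f"[findings for: {prompt[:40]}...]"

    async def research(question: str) -> str:
        # Delegation: the lead agent breaks the question into independent
        # subtasks, each with a clear objective and an expected output format.
        subtasks = [
            f"List primary sources on: {question}. Return bullet points.",
            f"List dissenting views on: {question}. Return bullet points.",
            f"List recent statistics on: {question}. Return bullet points.",
        ]
        # Parallelization: all sub-agents run simultaneously, so wall-clock
        # time is roughly one model call instead of three.
        results = await asyncio.gather(*(call_model(t) for t in subtasks))
        # The lead agent then synthesizes the sub-agents' outputs.
        return await call_model("Synthesize these notes:\n" + "\n".join(results))

    print(asyncio.run(research("agentic AI adoption in large firms")))

The design choice worth noticing is that the speedup comes from the shape of the task split, not from smarter models: three independent objectives, each with a specified output format, can run at once.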

LESSON 2: ITERATION MATTERS.

Edison famously tested thousands of light bulb designs and filament materials before arriving at a workable solution. Likewise, successful agentic AI systems work far better when they are allowed to learn from their early attempts and then try again. Claude Research spawns a multitude of AI agents, each doubling and tripling back on their own work as they go through a trial-and-error process to land on the right results.

This is exactly how management researchers have recommended organizations staff novel projects where large teams are tasked with exploring unfamiliar terrain: Teams should split up and conduct trial-and-error learning, in parallel, like a pharmaceutical company progressing multiple molecules towards a potential clinical trial. Even when one candidate seems to have the strongest chances at the outset, there is no telling in advance which one will improve the most as it is iterated upon.

The advantage of using AI for this iterative process is speed: AI agents can complete and retry their tasks in milliseconds. A recent report from Microsoft Research illustrates this. Its agentic AI system launched up to five AI worker teams in a race to finish a task first, each plotting and pursuing its own iterative path to the destination. They found that a five-team system typically returned results about twice as fast as a single AI worker team with no loss in effectiveness, although at the cost of about twice as much total computing spend.

Going further, Claude Research's system design endowed its top-level AI agent, the 'Lead Researcher,' with the decision authority to delegate more research iterations if it was not satisfied with the results returned by its sub-agents. The lead agent managed the choice of whether to continue its iterative search loop, up to a limit. To the extent that agentic AI mirrors the world of human management, this might be one of the most important topics to watch going forward. Deciding when to stop, and what is 'good enough,' has always been one of the hardest problems organizations face.
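In code, the 'good enough' decision reduces to an explicit quality gate plus a hard iteration budget. The sketch below is a toy, assuming a trivial check and attempt function; in a real system the lead agent itself would typically grade each attempt against the objective:

    def good_enough(result: str) -> bool:
        # Stand-in for the lead agent's quality check.
        return "sources cited" in result

    def iterate(task: str, attempt, max_rounds: int = 5) -> str:
        # Trial-and-error loop: retry, feeding back the previous attempt,
        # until the quality gate passes or the iteration budget runs out.
        result = ""
        for _ in range(max_rounds):
            result = attempt(task, feedback=result)
            if good_enough(result):
                return result  # the lead agent is satisfied; stop early
        return result  # budget exhausted; return the best effort

    # Toy attempt function that improves once it has feedback to build on.
    draft = iterate(
        "summarize the evidence",
        lambda task, feedback: f"draft: {task}" + (", sources cited" if feedback else ""),
    )
    print(draft)

The budget matters as much as the gate: without max_rounds, an unsatisfiable quality check would spin forever, which is exactly the 'burning resources' failure mode described at the top of this essay.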

LESSON 3: EFFECTIVE INFORMATION SHARING MATTERS.

If you work in a manufacturing department, you wouldn’t rely on your division chief to explain the specs you need to meet for a new product. You would go straight to the source: the domain experts in R&D. Successful organizations need to be able to share complex information efficiently both vertically and horizontally.

To solve the horizontal sharing problem for Claude Research, Anthropic introduced a novel mechanism for AI agents to share their outputs with each other by writing directly to a common file system, like a corporate intranet. In addition to saving the central coordinator the cost of consuming every sub-agent's output, this approach helps resolve the information bottleneck: it enables AI agents that have become specialized in their tasks to own how their content is presented to the larger digital team. This is a smart way to leverage the superhuman scope of AI workers, enabling each of many AI agents to act as a distinct subject matter expert.
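A minimal sketch of that shared-workspace idea, with a local directory standing in for whatever store a production system would use; the agent and artifact names are hypothetical:

    from pathlib import Path

    WORKSPACE = Path("shared_workspace")
    WORKSPACE.mkdir(exist_ok=True)

    def publish(agent_id: str, artifact: str, content: str) -> None:
        # A sub-agent writes its output where any peer can read it,
        # instead of routing everything through the central coordinator.
        path = WORKSPACE / f"{agent_id}__{artifact}.md"
        path.write_text(content, encoding="utf-8")

    def read_workspace() -> dict:
        # The lead agent (or any peer) scans for everyone else's outputs.
        return {p.name: p.read_text(encoding="utf-8")
                for p in sorted(WORKSPACE.glob("*.md"))}

    publish("search_agent_1", "primary_sources", "- source A\n- source B")
    publish("search_agent_2", "statistics", "- 40% of pilot projects stall")
    print(list(read_workspace()))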

In effect, Anthropic’s AI Lead Researchers must be generalist managers. Their job is to see the big picture and translate that into the guidance that sub-agents need to do their work. They don’t need to be experts on every task the sub-agents are performing. The parallel goes further: AIs working together also need to know the limits of information sharing, like what kinds of tasks don’t make sense to distribute horizontally.

Management scholars suggest that human organizations focus on automating the smallest tasks: the ones that are most repeatable and that can be executed most independently. Tasks that require more interaction between people tend to go slower, since the communication not only adds overhead but is something that many people struggle to do effectively.

Anthropic found much the same was true of its AI agents: “Domains that require all agents to share the same context or involve many dependencies between agents are not a good fit for multi-agent systems today.” This is why the company focused its premier agentic AI feature on research, a process that can leverage a large number of sub-agents each performing repetitive, isolated searches before compiling and synthesizing the results.

All of these lessons lead to the conclusion that knowing your team and paying keen attention to how to get the best out of them will continue to be the most important skill of successful managers of both humans and AIs. With humans, we call this leadership skill empathy. That concept doesn’t apply to AIs, but the techniques of empathic managers do.

Anthropic got the most out of its AI agents by performing a thoughtful, systematic analysis of their performance and the supports they benefited from, and then used that insight to optimize how they execute as a team. Claude Research is designed to put different AI models in the positions where they are most likely to succeed. Anthropic's most intelligent Opus model takes the Lead Researcher role, while its cheaper and faster Sonnet model fills the more numerous sub-agent roles. Anthropic has analyzed how to distribute responsibility and share information across its digital worker network. And it knows that the next generation of AI models might work in importantly different ways, so it has built performance measurement and management systems that help it tune its organizational architecture to the characteristics of its AI 'workers.'

Key Takeaways

Managers of hybrid teams can apply these ideas to design their own complex systems of human and digital workers:

DELEGATE.

Analyze the tasks in your workflows so that you can design a division of labour that plays to the strengths of each of your resources. Entrust your most experienced humans with the roles that require context and judgment, and entrust AI models with the tasks that need to be done quickly or that benefit from extreme parallelization.

If you're building a hybrid customer service organization, let AIs handle tasks like eliciting pertinent information from customers and suggesting common solutions. But always escalate to human representatives to resolve unique situations and offer accommodations, especially when doing so can carry legal obligations and financial ramifications. To help the two work together well, task the AI agents with preparing concise briefs that compile the case history and potential resolutions, so a human can jump into the conversation quickly.
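A sketch of that split in Python; the ticket fields, the 100-dollar threshold and the helper functions are all hypothetical, chosen only to show the routing logic:

    def draft_brief(ticket: dict) -> str:
        # Stand-in for an AI-prepared brief: case history plus candidate fixes.
        return f"Ticket {ticket['id']}: {ticket['issue']} (history and options attached)"

    def ai_resolve(ticket: dict) -> str:
        # Stand-in for the AI agent handling a routine case on its own.
        return f"AI reply sent for ticket {ticket['id']}"

    def route(ticket: dict) -> str:
        # Escalate anything with legal or financial stakes to a human,
        # handing over a concise AI-prepared brief; AI handles the rest.
        if ticket.get("legal_risk") or ticket.get("refund_requested", 0) > 100:
            return "ESCALATE to human rep:\n" + draft_brief(ticket)
        return ai_resolve(ticket)

    print(route({"id": 101, "issue": "password reset"}))
    print(route({"id": 102, "issue": "injury claim", "legal_risk": True}))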

ITERATE.

AIs will likely underperform your top human team members at solving novel problems in those members' own fields of expertise. But AI agents' speed and parallelization still make them valuable partners. Look for ways to augment human-led explorations of new territory with agentic AI scouting teams that can explore many paths in advance.

Hybrid software development teams will especially benefit from this strategy. Agentic coding AI systems are capable of building apps, autonomously improving and bug-fixing their code to meet a spec. But without humans in the loop, they can fall into rabbit holes. Examples abound of AI-generated code that appears to satisfy the specified requirements but diverges from organizational requirements for security, integration or a user experience that people would actually want. Take advantage of the fast iteration of AI programmers to test different solutions, but make sure your human team is checking the AI's work and redirecting it when needed.

SHARE.

Make sure your hybrid team's outputs are accessible to everyone on the team, so that humans and AIs can benefit from one another's work products. Make sure workers doing hand-offs write down clear instructions with enough context that either a human colleague or an AI model could follow them. Anthropic found that AI teams benefited from clearly communicating their work to each other, and the same will be true of communication between humans and AIs on hybrid teams.

MEASURE AND IMPROVE.

Organizations should always strive to grow the capabilities of their human team members over time. Assume that the capabilities and behaviors of your AI team members will change over time too, but at a much faster rate; so will the ways humans and AIs interact. Make sure you understand how they are performing, individually and together, at the task level, and plan to experiment with the roles you ask AI workers to take on as the technology evolves.
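What task-level measurement might look like in practice is sketched below; the log schema is an assumption, meant only to show that outcomes should be recorded per task and per worker configuration (human, AI or hybrid) so the pairings can be compared over time:

    from collections import defaultdict

    # Hypothetical task-level log entries: (task_type, worker_kind, success).
    LOG = [
        ("triage", "ai", True), ("triage", "ai", False),
        ("triage", "human", True), ("escalation", "human+ai", True),
        ("escalation", "human", False),
    ]

    def success_rates(log):
        # Aggregate outcomes per (task, worker) pair so humans, AIs and
        # hybrid pairings can be compared on the same task over time.
        totals = defaultdict(lambda: [0, 0])  # pair -> [successes, attempts]
        for task, worker, ok in log:
            totals[(task, worker)][0] += int(ok)
            totals[(task, worker)][1] += 1
        return {pair: s / n for pair, (s, n) in totals.items()}

    print(success_rates(LOG))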

An important example of this comes from medical imaging. Harvard Medical School researchers have found that hybrid AI-physician teams have wildly varying performance as diagnosticians. The problem wasn't necessarily that the AI had poor or inconsistent performance; what mattered was the interaction between person and machine. Different doctors' diagnostic performance benefited, or suffered, to different degrees when they used AI tools. Being able to measure and optimize those interactions, perhaps at the individual level, will be critical to hybrid organizations.

In Closing

We are in a phase of AI technology where the best performance is going to come from mixed teams of humans and AIs working together. Managing those teams is not going to be the same as we’ve grown used to, but the hard-won lessons of decades past still have a lot to offer.

This essay was written with Nathan E. Sanders, and originally appeared in Rotman Management Magazine.