<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=Windows-1252">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
        {font-family:"Cambria Math";
        panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
        {font-family:Calibri;
        panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
        {margin:0in;
        margin-bottom:.0001pt;
        font-size:11.0pt;
        font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
        {mso-style-priority:99;
        color:#0563C1;
        text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
        {mso-style-priority:99;
        color:#954F72;
        text-decoration:underline;}
span.EmailStyle17
        {mso-style-type:personal-compose;
        font-family:"Calibri",sans-serif;
        color:windowtext;}
.MsoChpDefault
        {mso-style-type:export-only;}
@page WordSection1
        {size:8.5in 11.0in;
        margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
        {page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="#0563C1" vlink="#954F72">
<div class="WordSection1">
<p class="MsoNormal">New research from Anthropic examines simulated reasoning (SR) models like DeepSeek's R1, and its own Claude series. In a research paper
<a href="https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fassets.anthropic.com%2Fm%2F71876fabef0f0ed4%2Foriginal%2Freasoning_models_paper.pdf&data=05%7C02%7Ccaice-csse%40eng.auburn.edu%7C466011fac3664b20ede908dd7907c3bd%7Cccb6deedbd294b388979d72780f62d3b%7C0%7C0%7C638799796434866635%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&sdata=XEajaMysKg%2F1BhcRN8ZweYxtlWUaNttU1Ks%2BMgblrAM%3D&reserved=0" originalsrc="https://assets.anthropic.com/m/71876fabef0f0ed4/original/reasoning_models_paper.pdf">
posted last week</a>, Anthropic's Alignment Science team demonstrated that these SR models frequently fail to disclose when they've used external help or taken shortcuts, despite features designed to show their "reasoning" process.<o:p></o:p></p>
<p class="MsoNormal"><a href="https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Farstechnica.com%2Fai%2F2025%2F04%2Fresearchers-concerned-to-find-ai-models-hiding-their-true-reasoning-processes%2F&data=05%7C02%7Ccaice-csse%40eng.auburn.edu%7C466011fac3664b20ede908dd7907c3bd%7Cccb6deedbd294b388979d72780f62d3b%7C0%7C0%7C638799796434893612%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&sdata=rmacld9e1SSkJli6QQfbEO99EAEJ6eMtc3EgGguKSrQ%3D&reserved=0" originalsrc="https://arstechnica.com/ai/2025/04/researchers-concerned-to-find-ai-models-hiding-their-true-reasoning-processes/">https://arstechnica.com/ai/2025/04/researchers-concerned-to-find-ai-models-hiding-their-true-reasoning-processes/</a>
<o:p></o:p></p>
</div>
</body>
</html>