A few hours ago, Pliny obtained and shared the new system prompt for Claude 3.7, noting only a "few small differences" from the version Anthropic had previously published. Interestingly, the prompt even includes a strawberry-flavoured Easter egg. This level of openness fosters trust, and without trust there is no foundation for progress.
Around the same time, Pliny also shared details of Grok's system prompt, which raised concerns about selective filtering and censorship involving two of the most influential figures in business and politics. There are also examples of Grok generating potentially harmful responses.
I don’t advocate for overbearing regulation that stifles innovation, but I do believe in Responsible AI—built on principles of fairness, transparency, accountability, privacy, security, reliability, inclusiveness and safety.
Where do you stand on this?