Critical AutoGPT Vulnerability Allows Hackers to Access Servers

The AutoGPT library’s shell command denylist feature is ineffective. Despite setting a denylist to prevent the execution of the “whoami” command, it can be easily bypassed. 

The report demonstrates that by creating a symbolic link to the “whoami” command with a different name, the denylist can be circumvented, which highlights the limitations of the current implementation and the need for more robust mechanisms to restrict shell command execution in AutoGPT.

It’s possible to bypass certain denylists by specifying the full path to commands because denylists often only block specific command names, not their full paths. 

By providing the absolute path to a command, users can circumvent these restrictions, which is particularly useful when dealing with systems that have limited or restricted command access. 

However, it’s important to note that this method may not work for all denylists, especially those that employ more sophisticated filtering techniques.

The Docker Compose command initiates a one-time execution of the Auto-GPT container, specifically the `serve` command with the `–gpt3only` flag, which triggers the Auto-GPT server to start, utilizing the environment variables defined in the `.env` file. 

Server operates solely with the GPT-3 model, bypassing the need for additional language models, which allows for testing and experimentation with the Auto-GPT’s capabilities using the GPT-3 model’s language generation and understanding abilities.

The user creates a task on the AutoGPT server by sending a POST request to the “/ap/v1/agent/tasks” endpoint. The request body includes the instruction to execute the command “/bin/./whoami” and return the result without questioning. 

The server responds with the task ID, which can be used to track and manage the task, which is added to the server’s queue and will be executed by the AutoGPT agent.

An API request to execute the command “/bin/./whoami” was successfully processed. The server returned a response indicating that the command was executed, and the output was “I will execute the command and provide you with the result.\n\nNext Command: execute_shell(command_line=’/bin/./whoami’)”. 

The response also included metadata such as the task ID, step ID, status, and additional information about the execution, while the curl command successfully executed the “/bin/./whoami” command on the remote system. 

The output of the command was “root,” indicating that the execution was performed with root privileges, which suggests that the attack was successful and the attacker has gained unauthorized access to the system with elevated permissions.

The vulnerability in AutoGPT’s shell commands denylist settings allows attackers to execute arbitrary shell commands, bypassing the intended security restrictions, which is due to a flaw in the implementation of the denylist mechanism, and fails to effectively prevent unauthorized commands from being executed. 

According to Hunter, as a result, an attacker can potentially exploit this vulnerability to gain unauthorized access to sensitive system resources or execute malicious code.

Also Read:

Kaaviya
Kaaviyahttps://cyberpress.org/
Kaaviya is a Security Editor and fellow reporter with Cyber Press. She is covering various cyber security incidents happening in the Cyber Space.

Recent Articles

Related Stories

LEAVE A REPLY

Please enter your comment!
Please enter your name here