Setting up OpenAI Codex CLI on DCC
Codex CLI is OpenAI's coding agent that runs in a terminal. It can read, change, and run code, and assist with agentic AI tasks.
Codex CLI can be used on DCC, either through an SSH terminal or the terminal of an Open OnDemand Jupyter/RStudio container.
Once you are connected to DCC, enter the following commands in the terminal. First, set up Node.js through nvm in your home directory:
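A typical way to do this is with nvm's install script (the version number below is an assumption; check the nvm README for the current release):

```shell
# Download and run the nvm install script; it installs nvm into ~/.nvm
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash
```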
Then do:
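For example, assuming the default bash shell on DCC:

```shell
# Re-read your startup file so the nvm command becomes available
source ~/.bashrc
```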
to reload your shell. Next, install Node.js with:
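For example, to install the latest long-term-support release:

```shell
# Install and activate the current LTS version of Node.js under ~/.nvm
nvm install --lts
```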
Now install codex:
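The CLI is published on npm as `@openai/codex`:

```shell
# Install the Codex CLI globally into your nvm-managed Node.js
npm install -g @openai/codex
```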
Launch the application with:
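```shell
codex
```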
If this is the first time you are using it, you will see the following setup screen:
```
_,+_=+*++=+__
_=|+|\_,_==,|_|*|+_
,"+|',;*` "~:+|\;||,
/;|*;/` _;;\||;\
//|`/' ,/,,'*/*\|\
,\|"/` _***''/*'|~\|,
`||"| /:/. _|` "!|'*
~/| | |"|,_\, | */|
^"\ |+~;=====;=|_*|_"\_ | |~|
|\\_|"=" /`\ \|_\,~ \_|
/'| | `""""""" '`/;=|_/^/`
,||_|. ,/|\^/`
\ |,'/__ _.";*/+/
\=\+;*+\~==_++"-_+*`
"~!*=\~__+__**`

Welcome to Codex, OpenAI's command-line coding agent

Sign in with ChatGPT to use Codex as part of your paid plan
or connect an API key for usage-based billing

> 1. Sign in with ChatGPT
     Usage included with Plus, Pro, Team, and Enterprise plans

  2. Sign in with Device Code
     Sign in from another device with a one-time code

  3. Provide your own API key
     Pay for what you use

Press Enter to continue
```
If you have a personal OpenAI account and want to use that, go with option (2). You may have to log in to your ChatGPT settings and toggle "Enable device code authorization for codex" to get this to work (thank you, Dr. Elaine Guevara, for pointing this out).
If you have an API key from Duke's AI Gateway, create ~/.codex/config.toml with the following content, then run export LITELLM_TOKEN=<your_api_token> from the command line before you start codex (the env_key setting tells codex to read your API key from that environment variable). You may modify parameters such as model and model_reasoning_effort. Thank you, OIT Hot Flamin' Team for sharing this config file.
```toml
#:schema https://developers.openai.com/codex/config-schema.json

# View all configuration options:
# https://github.com/openai/codex/blob/main/codex-rs/config.md

# Set a default model and provider here
model = "gpt-5.3-codex"
model_provider = "litellm"

# Disable analytics
[analytics]
enabled = false

# Prevent access to certain environment variables
[shell_environment_policy]
exclude = ["VAULT_*", "OP*"]

# The responses API is generally preferred over chat
[model_providers.litellm]
name = "litellm"
base_url = "https://litellm.oit.duke.edu/v1"
env_key = "LITELLM_TOKEN"
wire_api = "responses"
## Optionally set retries higher than the default. This can help if you are
## being rate limited in certain instances.
# request_max_retries = 10
# stream_max_retries = 10

# Create custom profiles that target a model and provider
# https://developers.openai.com/codex/config-advanced#profiles
# Note: the profile name contains a dot, so it must be quoted in TOML
[profiles."gpt-5.2"]
model_provider = "litellm"
model = "gpt-5.2"
```
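With the config above in place, a session might be started like this (the --profile flag selects a profile defined in config.toml):

```shell
# Make your AI Gateway key available to codex (keep the actual token private)
export LITELLM_TOKEN=<your_api_token>

# Start codex using the gpt-5.2 profile defined in the config
codex --profile gpt-5.2
```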
Now you are all set to use codex. Enjoy!
Important!
If you will be using codex for computationally intensive tasks, first request an interactive session on DCC and run codex inside that session.
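A sketch of how this might look with Slurm's srun (the partition name, CPU count, and memory below are assumptions; check the DCC user guide for values appropriate to your allocation):

```shell
# Request an interactive shell on a compute node before running codex
srun -p common -c 4 --mem=8G --pty bash -i

# Then, inside the interactive session:
codex
```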