It would be a great feature to be able to set the ‘temperature’ for the request in the chat window. I use 0.3 because it is the sweet spot where the model does not “hallucinate” yet still consistently returns correct code. For example, with the OpenAI API:
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say this is a test!"}],
    "temperature": 0.3
  }'
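
For comparison, here is a minimal sketch of the same request using the official openai Python package (assuming the v1+ client API, and that OPENAI_API_KEY is set in the environment as in the curl example above):

# Minimal sketch: same chat completion request with temperature=0.3,
# using the openai Python package (v1+ client). Assumes OPENAI_API_KEY
# is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say this is a test!"}],
    temperature=0.3,  # lower values make output more deterministic
)
print(response.choices[0].message.content)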