Hacker News

Unlike Anthropic's models, OpenAI models don't have a `max_tokens` setting for API calls, so I assume the maximum token output limit is applied to API calls automatically.

Otherwise the max token output limit stated on the models page would be meaningless.



OpenAI does have a `max_tokens` setting. For the /chat/completions API it defaults to the maximum for the given model, but for the /completions API it defaults to 16.

https://platform.openai.com/docs/api-reference/chat/create#c...
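For reference, a minimal sketch of a /chat/completions request body with an explicit `max_tokens` cap (the model name and prompt here are illustrative, and the request is only constructed, not sent):

```python
import json

# Request body for POST https://api.openai.com/v1/chat/completions.
# `max_tokens` caps the completion length; omitting it falls back to
# the model's default limit, per the docs linked above.
payload = {
    "model": "gpt-4o-mini",  # illustrative model name
    "messages": [{"role": "user", "content": "Say hello."}],
    "max_tokens": 50,  # explicit cap on output tokens
}
body = json.dumps(payload)
print(body)
```

Sending this with an `Authorization: Bearer <API key>` header is all that's needed to make the capped call.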


Oops. Not sure how I missed that.



