How to query your Berri Endpoint
Set this to True or False. When enabled, the returned format is: Rationale: <model rationale> Answer: <model answer>. This can help improve accuracy by up to 30%.
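If you turn this on, the rationale and the answer arrive together as one string in the layout above. A minimal sketch of splitting that string back apart, assuming the response follows the documented layout exactly:

```python
# Split a "Rationale: ... Answer: ..." response string into its two parts.
def parse_rationale_answer(response_text: str) -> dict:
    rationale, _, answer = response_text.partition("Answer:")
    return {
        "rationale": rationale.replace("Rationale:", "", 1).strip(),
        "answer": answer.strip(),
    }

print(parse_rationale_answer("Rationale: Paris is the capital of France. Answer: Paris"))
# {'rationale': 'Paris is the capital of France.', 'answer': 'Paris'}
```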
Models you can choose from:

- model: text-davinci-003
- [RECOMMENDED] ChatGPT: model: gpt-3.5-turbo
- GPT-4: model: gpt-4
- T5: model: t5 (recommended for those looking for an on-prem alternative to GPT)

Try out the different models for your data: https://play.berri.ai/ 🚀
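To pick a model, pass the model value alongside your query. A sketch only: the endpoint URL and the other parameter names below are placeholders, not confirmed by this page; only model comes from the list above.

```python
import requests

# Placeholder endpoint URL and illustrative parameters; only 'model' is documented above.
BERRI_ENDPOINT = "https://api.berri.ai/query"

params = {
    "query": "What is our refund policy?",  # your question (illustrative)
    "model": "gpt-3.5-turbo",               # or text-davinci-003, gpt-4, t5
}
response = requests.get(BERRI_ENDPOINT, params=params)
print(response.json())
```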
Setting top_k = 1 is the same as saying: for a given question, only give the most similar chunk of data to GPT.

Setting top_k = 2 is the same as saying: for a given question, give the top 2 most similar chunks of data to GPT.
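In other words, top_k controls how many of the most similar chunks of your data get retrieved and handed to the model. A rough sketch of that ranking step (not Berri's actual internals), using cosine similarity over pre-computed embeddings:

```python
import numpy as np

def top_k_chunks(query_emb, chunk_embs, chunks, top_k=1):
    """Return the top_k chunks whose embeddings are most similar to the query."""
    query_emb = np.asarray(query_emb, dtype=float)
    chunk_embs = np.asarray(chunk_embs, dtype=float)
    sims = chunk_embs @ query_emb / (
        np.linalg.norm(chunk_embs, axis=1) * np.linalg.norm(query_emb)
    )
    best = np.argsort(sims)[::-1][:top_k]  # indices of the most similar chunks
    return [chunks[i] for i in best]

# top_k=1 passes only the single most similar chunk to the model;
# top_k=2 passes the two most similar chunks.
```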
The history query parameter is read server-side like this:

```python
import ast

history = request.args.get('history')  # raw 'history' query parameter (a string)
history = ast.literal_eval(history)    # parse the string back into a Python list
```
How does this impact your query?
When you pass in history, we summarize it and pass it as additional context (in addition to your query) to the model you selected.
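Since history is parsed with ast.literal_eval on the server, the client can send it as a stringified Python list. A sketch, where the endpoint URL and the shape of each history entry are illustrative rather than confirmed by this page:

```python
import requests

# Placeholder endpoint URL; the history items below are just examples.
history = [
    "user: What file formats do you support?",
    "assistant: We support PDF, DOCX, and TXT.",
]
params = {
    "query": "Can I also upload CSVs?",
    "history": str(history),  # a stringified list the server can ast.literal_eval
}
response = requests.get("https://api.berri.ai/query", params=params)
print(response.json())
```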