Resolving Knowledge Cut-off Discrepancies in Azure OpenAI’s GPT-4 Turbo Model

If you are an AI enthusiast working with Azure OpenAI, you might be considering upgrading your deployment from gpt-4 to the more advanced gpt-4-1106-preview, also referred to as GPT-4 Turbo, following Microsoft’s official announcement of the model. However, as you work through this upgrade, you might encounter a few discrepancies that can be confusing.

One such discrepancy concerns the knowledge cut-off of the Azure OpenAI deployment compared to the same model version in OpenAI’s playground. The knowledge cut-off is the date at which the model’s training data ends, i.e., the latest point in time it can be expected to know about. For instance, when you ask GPT-4 Turbo in OpenAI’s playground about its knowledge cut-off, it responds that its training data extends to April 2023, implying that the model should be aware of information and events up to that date.

However, when you pose the same question to the model deployed on Azure OpenAI, you get a different answer: it states that its knowledge cut-off is 2021, not the April 2023 indicated in the documentation. This discrepancy can be confusing and might even make you question whether the upgrade took effect.
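You can reproduce the discrepancy by asking the deployed model directly. The sketch below uses the openai Python SDK (v1+) with its AzureOpenAI client; the environment variable names, API version, and deployment name are assumptions for illustration, so substitute the values from your own Azure resource.

```python
import os

from openai import AzureOpenAI

# Assumed environment variables, API version, and deployment name;
# replace them with the values from your Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-12-01-preview",
)

response = client.chat.completions.create(
    model="gpt-4-turbo",  # your Azure *deployment* name, not the model family name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is your knowledge cutoff date?"},
    ],
)

# With the default system message, the Azure deployment may report 2021 here.
print(response.choices[0].message.content)
```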

Solution: Prompt Engineering

But fret not: there is a simple workaround involving a bit of prompt engineering. By changing the system message to "You are a helpful assistant with knowledge cutoff of April 2023", you guide the model to draw on its most recent training data. The updated system message ‘instructs’ the model to recognize and use information up to April 2023, aligning its answers with the knowledge cut-off stated in the documentation.
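Here is the same request with the tweaked system message, a minimal sketch that assumes the same hypothetical deployment name and environment variables as above.

```python
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-12-01-preview",  # assumed preview version; use one your resource supports
)

response = client.chat.completions.create(
    model="gpt-4-turbo",  # assumed Azure deployment name
    messages=[
        # The tweaked system message states the documented cut-off explicitly.
        {
            "role": "system",
            "content": "You are a helpful assistant with knowledge cutoff of April 2023.",
        },
        {"role": "user", "content": "What is your knowledge cutoff date?"},
    ],
)

print(response.choices[0].message.content)
```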