Blog

  • Can a Git repository contain multiple .gitignore files in the root folder?

    No. A Git repository can have only one .gitignore file in its root folder, and that root .gitignore applies its rules to the entire repository. However, you can have additional .gitignore files in different subdirectories, each applying only to the directory it lives in and that directory's subdirectories. This lets you define specific ignore rules for different parts of your project.

    For example:

    • / (root directory)
      • .gitignore
      • /src
        • .gitignore
      • /docs
        • .gitignore

    Each .gitignore file can contain rules relevant to the files and directories at its level and below.
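
    As a quick illustration, the root .gitignore could hold repository-wide rules while the nested files stay local to their own directories (the patterns below are purely illustrative):

    # /.gitignore: applies to the entire repository
    *.log
    node_modules/

    # /src/.gitignore: applies only to /src and its subdirectories
    *.tmp

    # /docs/.gitignore: applies only to /docs and its subdirectories
    _build/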

  • Resolving the Missing File Explorer Issue in VS Code

    The File Explorer in VS Code is a crucial feature that lets you navigate your files and directories effortlessly. However, it can sometimes disappear because of an accidental user action, a glitch in the software, or a misbehaving extension.

    The Solution: ‘View: Reset View Locations’

    To restore your missing File Explorer, VS Code provides a simple and effective command: ‘View: Reset View Locations’. This command resets the layout of the sidebar and panel, bringing back any missing view, including the File Explorer.

    How to Run ‘View: Reset View Locations’

    Here’s a step-by-step guide on how to execute this command:

    1. Open the Command Palette: The Command Palette is a feature in VS Code that allows you to access all available commands. You can open it by pressing Ctrl+Shift+P on Windows and Linux, or Cmd+Shift+P on macOS.

    2. Type the Command: In the Command Palette, start typing ‘View: Reset View Locations’. As you type, VS Code will suggest commands that match your input. Once the ‘View: Reset View Locations’ command appears, click on it or press Enter to execute it.

    3. Check the File Explorer: After running the command, the layout of your VS Code should reset to its default state. This means the File Explorer should now be visible in the sidebar.

  • Reducing HttpClient Logging in a C# Application

    If you have an application written in C# that uses HttpClient, you may find that HttpClient produces a large amount of logging output, such as:

    • Start processing HTTP request
    • Sending HTTP request
    • Received HTTP response headers after … ms – 200
    • End processing HTTP request after … ms – 200

    These logs can be overwhelming and make it difficult to identify and troubleshoot issues. In this blog post, we’ll show you how to adjust the logging levels for HttpClient to reduce the amount of logging information generated by your application.

    To remove this logging from your C# application, adjust the logging level for the System.Net.Http.HttpClient category in your appsettings.json file. Setting the level to Warning or higher prevents the Information-level logs from appearing.

    Here’s an example of how you can adjust the logging level in your appsettings.json file:

    {
      "Logging": {
        "LogLevel": {
          "Default": "Information",
          "System.Net.Http.HttpClient": "Warning"
        }
      }
    }
    

    In the example above, the logging level for System.Net.Http.HttpClient is set to Warning, which means that only Warning, Error, and Critical logs will be emitted for that category; the Information-level logs will no longer appear.

    After making these changes, restart your application for the new logging settings to take effect.
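
    If you prefer to configure this in code rather than in appsettings.json, a minimal sketch for a .NET 6+ app using the minimal hosting model (assuming the standard WebApplication builder) would be:

    using Microsoft.Extensions.Logging;

    var builder = WebApplication.CreateBuilder(args);

    // Emit only Warning and above for HttpClient log categories,
    // suppressing the Information-level request/response messages.
    builder.Logging.AddFilter("System.Net.Http.HttpClient", LogLevel.Warning);

    var app = builder.Build();
    app.Run();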

  • What is the difference between the OpenAI Assistants API and the Chat Completions API?

    Chat Completions API: Basics and Use Cases

    The Chat Completions API is our go-to for generating text completions based on input prompts. It’s built around the concept of Messages and uses specified Models (e.g., GPT-3.5-turbo, GPT-4) to perform completions.

    Characteristics:

    • Stateless: It doesn’t maintain conversation history or state.
    • Manual Management: Requires handling conversation state, tool definitions, and code execution on the developer’s end.

    Ideal for:

    • Simple query-response interactions.
    • Applications where conversation context is either minimal or managed externally.
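
    To make the stateless, manually managed flow concrete, here is a minimal sketch using the official openai Python package (v1.x); the model name and messages are illustrative, and the conversation history must be resent on every call:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The full conversation history travels with every request,
    # because the Chat Completions API keeps no server-side state.
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what a knowledge cutoff is."},
    ]

    response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    answer = response.choices[0].message.content
    print(answer)

    # To continue the conversation, the developer appends both sides manually.
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": "Give a one-sentence example."})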

    Assistants API: Enhanced Capabilities

    The Assistants API represents an evolution, introducing statefulness and context-aware interactions. It’s built on three primary primitives:

    • Assistants: Include a base model plus instructions, tools, and context documents.
    • Threads: Maintain the state of a conversation, allowing for persistent context.
    • Runs: Facilitate the execution of an Assistant on a Thread, supporting textual responses and tool usage.

    Advantages:

    • Thread and Memory Management: Server-side storage of message history and conversation state.
    • Sliding Window: Automatic context window management.
    • Tools Integration: Optional use of tools to enhance capabilities.
    • Knowledge Retrieval: Easy seeding with a knowledge base for efficient information retrieval.
    • Code Interpreter: Supports Python code execution for complex problem-solving.
    • Functions: Allows invocation of third-party tools and APIs.

    Ideal for:

    • Complex conversational applications requiring persistent context.
    • Scenarios involving sophisticated interaction capabilities, like dynamic information retrieval or on-the-fly code execution.
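
    As a rough sketch of those three primitives using the openai Python package (v1.x, beta namespace at the time of writing); the assistant name, instructions, and prompt are illustrative:

    from openai import OpenAI

    client = OpenAI()

    # An Assistant bundles a model, instructions, and optional tools.
    assistant = client.beta.assistants.create(
        name="Data helper",
        instructions="Answer questions and run code when calculations are needed.",
        model="gpt-4-1106-preview",
        tools=[{"type": "code_interpreter"}],
    )

    # A Thread stores the conversation state on the server.
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id,
        role="user",
        content="What is the compound interest on 1000 at 5% over 10 years?",
    )

    # A Run executes the Assistant against the Thread.
    run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)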

    Choosing Between the APIs

    When deciding which API to use for a project, consider the following:

    • Statefulness: Do you need to maintain a conversation history?
    • Complexity: How complex are the interactions? Do they require context understanding over multiple exchanges?
    • Integration Needs: Will you need to integrate external tools or data sources?
    • Development Overhead: Are you equipped to manage the additional complexity that comes with stateful interactions?

    Conclusion

    Both the Chat Completions API and the Assistants API offer powerful capabilities for developing AI-driven applications. The choice between them should be guided by the specific needs of your project, considering factors like statefulness, interaction complexity, and integration requirements. By leveraging the strengths of each API, we can continue to push the boundaries of what’s possible with AI in our products and services.

  • Fix the “Ports are not Available” Error in Docker

    You might encounter the following error when running a Docker command locally:

    docker: Error response from daemon: Ports are not available: exposing port TCP 0.0.0.0:3000 -> 0.0.0.0:0: listen tcp 0.0.0.0:3000: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.

    This error means that Docker cannot bind to port 3000 on your host because the port is already in use by another application or service. Docker needs the port to be free so it can map it to the container you are trying to run. To resolve this issue, follow these steps:

    1. Find the Process Using Port 3000

    First, you need to identify which application is currently using port 3000. You can do this using the following commands, depending on your operating system:

    • On Linux or macOS:

      sudo lsof -i :3000
      

      or

      sudo netstat -tulpn | grep :3000
      
    • On Windows:

      netstat -ano | findstr :3000
      

    2. Stop the Process Using Port 3000

    Once you’ve identified the process using port 3000, you can choose to stop it to free up the port. How you stop the process will depend on what the process is. For example, if it’s a web server, you might stop it through its control interface or by terminating the process directly.

    • On Linux or macOS, if the process ID (PID) is, for example, 1234, you can stop it using:

      sudo kill 1234
      

      If the process doesn’t stop, you can force it to stop using:

      sudo kill -9 1234
      
    • On Windows, if you’ve identified the PID and wish to stop it, you can use:

      taskkill /PID 1234 /F
      

    3. Run Your Docker Container Again

    Now that port 3000 is free, try running your Docker container again using the same command that previously resulted in the error.

    4. Consider Using a Different Port

    If you cannot stop the process using port 3000 or you need it to keep running, consider mapping your Docker container to a different port on your host. You can do this by modifying the port mapping argument in your Docker run command. For example, to use port 3001 instead, you might use:

    docker run -p 3001:3000 your_image_name
    

    This command tells Docker to map port 3000 inside the container to port 3001 on your host, effectively bypassing the conflict on port 3000.

  • Automatically Run Specific PowerShell Scripts at the Start of Every PowerShell Session

    To automatically run specific PowerShell scripts at the start of every PowerShell session, you can modify your PowerShell profile file. PowerShell profiles are scripts that run at the start of a new PowerShell session. Here’s how to add an external script to your PowerShell startup:

    1. Find or Create Your PowerShell Profile: First, you need to determine if you already have a profile and where it is. Open PowerShell and run the following command:

      $profile
      

      This command will show the path to your current user’s profile for the PowerShell console. If you want to add the script for all users or for the ISE, you might need a different profile path.

    2. Check if the Profile Exists: Check if the profile already exists by running:

      Test-Path $profile
      

      If this returns False, the profile does not exist, and you’ll need to create it.

    3. Create the Profile if Necessary: If the profile doesn’t exist, you can create it by running:

      New-Item -path $profile -type file -force
      
    4. Edit Your Profile: Open the profile file in a text editor. You can do this from PowerShell as well, for example, using Notepad:

      notepad $profile
      
    5. Add Your Script to the Profile: In the profile file, you can add commands to run at startup. To run an external script, use the following command:

      . C:\path\to\your\script.ps1
      

      Replace C:\path\to\your\script.ps1 with the actual path to your script. The dot-sourcing operator (.) before the path ensures that the script runs in the current scope, so any functions or variables it defines will be available in your session. A fuller example of a profile entry appears after these steps.

    6. Save and Close the Profile: After adding your script, save the profile file and close the text editor.

    7. Test Your Profile: Open a new PowerShell window to test if your script runs as expected. If there are any errors in the script, they will show up when you start PowerShell.
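
    For instance, a defensive profile entry might look like this (the script path is purely illustrative):

      # Dot-source the startup script only if it exists, so a missing
      # file doesn't produce an error in every new session.
      $startupScript = 'C:\Scripts\MyStartup.ps1'
      if (Test-Path $startupScript) {
          . $startupScript
      }
      else {
          Write-Warning "Startup script not found: $startupScript"
      }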

    Keep in mind that running scripts from an external source can pose a security risk, so make sure you trust the scripts you add to your PowerShell startup. Additionally, if your system's execution policy restricts script execution, you may need to adjust it so that your profile and scripts can run. You can check the current policy with Get-ExecutionPolicy and set a new one with Set-ExecutionPolicy, but be sure you understand the security implications before changing it.

  • Get the count of each column of a Kusto table

    In Kusto Query Language (KQL), there is no single command that returns the count of every column at once, but you can compute the counts column by column in a single summarize statement. Note that the count() aggregation function takes no arguments and simply counts rows; to count the non-null values in a specific column, combine countif() with isnotnull(). Here's an example:

    MyTable | summarize countif(isnotnull(Column1)), countif(isnotnull(Column2)), countif(isnotnull(Column3))

    Replace MyTable with your table name and Column1, Column2, Column3 with your column names.

    This will give you the count of non-null values in each of these columns. If you want to count all rows regardless of null values, use count() with no argument.

    Please note that this does not provide the count of unique values. If you need the count of distinct values in a column, use the dcount() function.
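
    For instance, to see the non-null count and the distinct-value count of a column side by side (table and column names are illustrative):

    MyTable
    | summarize NonNullValues = countif(isnotnull(Column1)), DistinctValues = dcount(Column1)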

  • Configure import order in a Next.js project

    To configure the import order in a Next.js project, you can use ESLint with the import/order rule from the eslint-plugin-import plugin (or ESLint's built-in sort-imports rule). Here's how to set it up with import/order:

    Install ESLint and the necessary plugins if you haven’t already

    npm install --save-dev eslint eslint-plugin-import

    Then, create a .eslintrc.json file in your project root if it doesn’t exist already. If it does, you can just add the rules to it. In your .eslintrc.json file, you can add the “import/order” rule like so:

    {
      "plugins": ["import"],
      "rules": {
        "import/order": [
          "error",
          {
            "groups": ["builtin", "external", "internal"],
            "pathGroups": [
              {
                "pattern": "react",
                "group": "external",
                "position": "before"
              }
            ],
            "pathGroupsExcludedImportTypes": ["react"],
            "newlines-between": "always",
            "alphabetize": {
              "order": "asc",
              "caseInsensitive": true
            }
          }
        ]
      }
    }

    This will enforce a specific order to your imports:

    • Built-in modules (like fs and path)
    • External modules (like react and axios)
    • Internal modules (your own project’s modules)


    The “newlines-between”: “always” option will enforce having blank lines between each group.

    The “alphabetize” option will sort the imports alphabetically within each group.

    The “pathGroups” option allows you to customize the position of certain modules within their group. In this case, it’s making sure react always comes before other external modules.
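
    With this configuration, the imports in a typical file would end up grouped roughly like this (the module names are illustrative):

    // Built-in Node modules
    import path from "path";

    // External packages (react is placed before other externals by pathGroups)
    import React from "react";
    import axios from "axios";

    // Your own project's modules
    import { formatDate } from "./lib/formatDate";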

    Then, run ESLint to check your project:

    npx eslint . --fix

    This will automatically fix any issues with import order in your project according to the rules you’ve set.




    1. Comparing OpenAI and Azure OpenAI Features

      Update: This comparison was last updated in December 2023.

      In the rapidly evolving landscape of artificial intelligence, OpenAI has positioned itself as a leader in advanced AI models. Microsoft, through Azure OpenAI, has partnered with OpenAI to bring these innovations to a wider audience, integrating them into their cloud platform. Both entities offer a range of AI tools, but with differing availability and features. In this blog, we’ll delve into the latest offerings as of OpenAI DevDay and compare them with what’s available on Azure OpenAI as of December 2023.

      | Feature / Aspect | OpenAI Release Date | Azure OpenAI Status |
      | --- | --- | --- |
      | OpenAI Model GPT-3.5 | Launched November 2023 | Available |
      | OpenAI Model GPT-4 | Launched February 2023 | Available |
      | ChatGPT Enterprise | August 28, 2023 (Introducing ChatGPT Enterprise) | Microsoft provides similar services, such as Microsoft 365 Copilot and Bing Chat Enterprise |
      | OpenAI Model GPT-4 Turbo | November 6, 2023 (New models and developer products announced at DevDay) | Use model gpt-4 with version 1106-Preview; model only available in certain regions |
      | OpenAI Model GPT-4 Turbo with Vision (aka gpt-4v) | November 6, 2023 (New models and developer products announced at DevDay) | Use model gpt-4 with version vision-preview; model only available in certain regions |
      | OpenAI Model DALL-E 3 | November 6, 2023 (New models and developer products announced at DevDay) | Available |
      | GPTs | November 6, 2023 (Introducing GPTs) | Not available |
      | Assistants API | November 6, 2023 (New models and developer products announced at DevDay) | Not available |
      | ChatGPT Plugins and Advanced Data Analysis | | Not available |
    2. Resolving Knowledge Cut-off Discrepancies in Azure OpenAI’s GPT-4 Turbo Model

      If you are an AI enthusiast and have been using Azure OpenAI, you might be contemplating upgrading your deployment from gpt-4 to the more advanced gpt-4-1106-preview, also referred to as GPT-4 Turbo, following Microsoft's official announcement of the model. However, as you carry out this upgrade, you may encounter a few discrepancies that can be confusing.

      One such discrepancy concerns the knowledge cutoff of the Azure OpenAI deployment compared with the same model version in OpenAI's playground. The knowledge cutoff is the date at which the model's training data ends, so the model is unaware of events after that date. For GPT-4 Turbo, the documentation and OpenAI's own playground indicate a cutoff of April 2023, implying the model should know about information and events up to that date.

      However, when you ask the model deployed on Azure about its cutoff, you get a different answer: it states that its knowledge cutoff is 2021, not April 2023 as the documentation indicates. This discrepancy can be confusing and may even make you question whether the upgrade took effect.

      Solution: Prompt Engineering

      But fret not: there is a simple workaround that involves a bit of prompt engineering. By tweaking the system message to say, "You are a helpful assistant with a knowledge cutoff of April 2023", you can effectively guide the model to provide the most recent information it has. This change to the system message instructs the model to recognize and utilize data up to April 2023, aligning it with the knowledge cutoff indicated in the documentation.
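
      As a minimal sketch using the openai Python package (v1.x) against Azure OpenAI, where the endpoint, key, API version, and deployment name are all placeholders:

      from openai import AzureOpenAI

      client = AzureOpenAI(
          azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
          api_key="YOUR-API-KEY",
          api_version="2023-12-01-preview",
      )

      response = client.chat.completions.create(
          model="YOUR-GPT4-TURBO-DEPLOYMENT",  # your Azure deployment name
          messages=[
              # The system message carries the cutoff hint described above.
              {"role": "system", "content": "You are a helpful assistant with a knowledge cutoff of April 2023."},
              {"role": "user", "content": "What is your knowledge cutoff?"},
          ],
      )
      print(response.choices[0].message.content)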