
    Local LLM not working?

    • RAMarianHaertel last edited by

      I need some help to get it running.

      I installed two local LLMs, Gemma and Mistral. Both downloaded, but whatever I enter, there is no reply (it is empty) from the local LLM. What could be the reason?

      Also, I am trying to find a data-protection-relevant use case: having the local LLM analyze and summarize a document. Is this already doable with this solution?

      • Moved from Opera - browser AI by leocg
      • JoostD @RAMarianHaertel last edited by

        @ramarianhaertel I'm having the same issue. I inspected the Aria sidebar and see this error in my console from aria.js:

        http://localhost:11435/api/chat 404 (Not Found)

        I suspect there is an issue initializing the models on certain machines and operating systems. Perhaps Apple isn't supported at this time (I'm running on an M2 Mac).
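
        One way to narrow this down is to call the endpoint outside the browser and see whether anything is listening at all. A minimal Python sketch, assuming the service speaks plain HTTP: the port and /api/chat path are taken from the console error above, while the model name and message schema are placeholders, not confirmed Opera behavior:

        import json
        import urllib.error
        import urllib.request

        # Port and path copied from the console error; everything else is a guess.
        URL = "http://localhost:11435/api/chat"

        payload = json.dumps({
            "model": "gemma",  # placeholder model name, not a confirmed identifier
            "messages": [{"role": "user", "content": "hello"}],
        }).encode("utf-8")

        req = urllib.request.Request(
            URL, data=payload, headers={"Content-Type": "application/json"}
        )

        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                print(resp.status, resp.read()[:300])
        except urllib.error.HTTPError as e:
            # A 404 here means something is listening but the route is missing,
            # matching the error shown in the console.
            print("HTTP error:", e.code, e.reason)
        except urllib.error.URLError as e:
            # "Connection refused" would mean nothing is listening on the port.
            print("Connection failed:", e.reason)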

        • rikazkhan7 Banned @RAMarianHaertel last edited by

          @ramarianhaertel Troubleshooting Gemma and Mistral

          1. No Reply from Local LLMs
            If you're not receiving any response from Gemma and Mistral after entering input, there could be a few reasons for this:

          • Installation Issues: Double-check that the installations were successful and that there were no errors during the process. Sometimes installation issues can prevent the models from running correctly (a quick check is sketched after this list).
          • Environment Setup: Ensure that your environment is set up properly to run the local LLMs. This includes having the necessary dependencies installed and configured correctly.
          • Input Format: Make sure you're providing input in the correct format expected by the models. Depending on how Gemma and Mistral are designed, they might require input in a specific format or structure.
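
          If the local service is Ollama-compatible (only an assumption: the /api/chat path in the error above hints at it, and the /api/tags endpoint below is part of that same guess), a short Python sketch can list which models actually finished installing:

          import json
          import urllib.request

          # Assumption: Ollama-style /api/tags endpoint on the port from the
          # earlier console error; not confirmed Opera behavior.
          URL = "http://localhost:11435/api/tags"

          with urllib.request.urlopen(URL, timeout=10) as resp:
              data = json.load(resp)

          # An empty list would point at the "Installation Issues" case above.
          for model in data.get("models", []):
              print(model.get("name"))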

