• Type: Improvement
    • Resolution: Won't Fix
    • Priority: Normal
    • Component: Web service

      While in SLO we had a call with the 3scale team, and they pointed us to a Varnish plugin that does exactly the kind of thing we want to do. We had identified a few possible race conditions, and this solution takes care of them. So, let's move our 3scale implementation over to that Varnish plugin so we can use recommended off-the-shelf software.

      Finally, the rate limiter needs to handle a single person making a lot of simultaneous requests. Please evaluate the currently proposed rate-limiter changes with Dave and see if further changes are required. Assign Dave MBH tickets as needed.

          [MBS-3785] Finish 3scale integration

          Ian McEwen added a comment -

          Unassigning to close, we aren't likely to use 3scale at this time.


          Oliver Charles added a comment -

          Ok, I sketched out what Rob and I just discussed. The first option means that clients always talk to musicbrainz-server, which in turn talks to 3scale. The second option means clients talk to the Varnish cache first, which does the query to 3scale and then forwards the request to the backend.

          Oliver Charles added a comment -

          I'm assigning this to Rob for further feedback.

          Oliver Charles added a comment -

          I've done some work looking into how this works, and here are my findings...

          Firstly, a look at Varnish. Varnish is an HTTP accelerator, but in this case it's being used essentially to provide hooks for certain parts of an HTTP request: for example, doing something special on a GET request and caching the result, doing something special on cache hits, and so on.

          So, the Varnish mod gives you two options.

          1. Handle the callback yourself, but use a cache. With this approach, your web service code still makes the HTTP request to 3scale, but instead of using 3scale's actual host, you make the call to yourself. Varnish intercepts this: the first call goes to 3scale and the response is cached for, say, 30 seconds; within those 30 seconds, all calls just hit the cache. You essentially give users a 30-second grace period (the amount is configurable).
          2. Use Varnish to do the 3scale work as well. In this configuration, the Varnish VCL first makes a request to 3scale (caching the response), and if that succeeds it passes through to our backend to handle the web service request. This means the server never needs to worry about 3scale at all (and if people want to run the server locally, they don't have to faff around with disabling 3scale options).
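          The grace-window behaviour of option 1 can be sketched independently of Varnish. This is a minimal Python sketch, with a stand-in authorize function in place of the real 3scale call; `TTLCache` and its API are illustrative, not part of any MusicBrainz or 3scale library:

          ```python
          import time

          class TTLCache:
              """Mimics the Varnish behaviour described above: the first call for
              a key goes upstream; repeat calls within `ttl` seconds are served
              from the cache, so users get a grace window of at most `ttl` s."""

              def __init__(self, ttl=30, clock=time.monotonic):
                  self.ttl = ttl
                  self.clock = clock   # injectable for testing
                  self._store = {}     # key -> (expires_at, value)

              def get(self, key, fetch):
                  now = self.clock()
                  hit = self._store.get(key)
                  if hit is not None and hit[0] > now:
                      return hit[1]             # cache hit: no upstream call
                  value = fetch(key)            # cache miss: call "3scale"
                  self._store[key] = (now + self.ttl, value)
                  return value

          # Count how often the pretend 3scale backend is actually contacted.
          calls = []
          def authorize(api_key):
              calls.append(api_key)
              return True  # pretend 3scale said the key is valid

          cache = TTLCache(ttl=30)
          cache.get("key-1", authorize)
          cache.get("key-1", authorize)  # served from cache, no second upstream call
          # len(calls) == 1
          ```

          The trade-off is the same one the comment notes: a revoked key keeps working for up to `ttl` seconds, in exchange for one upstream authorization call per key per window.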


          It's all quite undocumented, so I mostly reverse-engineered that from the C module and the example VCLs. However, it doesn't look overly complicated now that I have an idea of how it works.


          Regarding the rate limiter, I think we talked about using a per-authorization rate limit with a high ceiling. We'll have to figure out what that limit actually is, but something like 10 simultaneous requests should be OK. CCing djce so we can discuss how this works.
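          A per-authorization cap on simultaneous requests could look something like this sketch; the class, its API, and the 429-style rejection are assumptions for illustration, not the existing MusicBrainz rate limiter:

          ```python
          import threading
          from collections import defaultdict
          from contextlib import contextmanager

          class ConcurrencyLimiter:
              """Caps the number of in-flight requests per authorization token.
              The default of 10 matches the ballpark figure discussed above."""

              def __init__(self, max_in_flight=10):
                  self.max_in_flight = max_in_flight
                  self._lock = threading.Lock()
                  self._in_flight = defaultdict(int)

              @contextmanager
              def acquire(self, token):
                  with self._lock:
                      if self._in_flight[token] >= self.max_in_flight:
                          # A real server would answer 429 Too Many Requests.
                          raise RuntimeError("too many simultaneous requests")
                      self._in_flight[token] += 1
                  try:
                      yield
                  finally:
                      with self._lock:
                          self._in_flight[token] -= 1

          limiter = ConcurrencyLimiter(max_in_flight=2)
          with limiter.acquire("user-a"):
              with limiter.acquire("user-a"):
                  pass  # two concurrent requests are fine; a third would raise
          ```

          Counting in-flight requests rather than requests-per-second matches the "simultaneous requests" framing above: a client with 10 slow requests open is throttled, while a client making fast sequential requests is not.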


            Assignee: Unassigned
            Reporter: Robert Kaye (rob)
            Votes: 0
            Watchers: 2
