Type: Improvement
Resolution: Fixed
Priority: Normal
Labels: None
The BigQuery writer currently reads one "block" of listens from RabbitMQ and then sends it to BigQuery. A block is only 50 listens, so working through a backlog takes a very long time.
Ideally, the writer would read up to a maximum number of listens from RabbitMQ and submit a much larger batch, which would speed things up considerably. We need to ensure that listens cannot be lost before they are successfully submitted to BQ; this likely requires careful management of the RabbitMQ consumer acknowledgements.
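A minimal sketch of the deferred-acknowledgement idea described above, assuming a pika-style channel. Names like `BatchingWriter`, `submit_to_bigquery`, and `MAX_BATCH_SIZE` are hypothetical, not from the actual codebase. The key point is that `basic_ack` is only called after BigQuery accepts the batch, so a crash before submission leaves the messages unacked and RabbitMQ redelivers them (at-least-once delivery):

```python
import json

MAX_BATCH_SIZE = 1000  # hypothetical cap, much larger than the current 50


class BatchingWriter:
    """Accumulate listens from RabbitMQ and submit them to BigQuery in
    large batches, deferring acks until the batch is safely written."""

    def __init__(self, channel, submit_to_bigquery, max_batch=MAX_BATCH_SIZE):
        self.channel = channel
        self.submit = submit_to_bigquery  # stand-in for the real BQ insert
        self.max_batch = max_batch
        self.batch = []                   # decoded listens awaiting submission
        self.last_delivery_tag = None     # highest tag seen in this batch

    def on_message(self, channel, method, properties, body):
        # Do NOT ack here -- acking only happens after BigQuery accepts
        # the batch, so unsubmitted listens survive a crash.
        self.batch.append(json.loads(body))
        self.last_delivery_tag = method.delivery_tag
        if len(self.batch) >= self.max_batch:
            self.flush()

    def flush(self):
        if not self.batch:
            return
        # If this raises, nothing below runs and nothing gets acked;
        # RabbitMQ will redeliver the unacked messages.
        self.submit(self.batch)
        # multiple=True acknowledges every delivery up to and including
        # this tag in one call, covering the whole batch.
        self.channel.basic_ack(delivery_tag=self.last_delivery_tag,
                               multiple=True)
        self.batch = []
        self.last_delivery_tag = None
```

With this shape, raising the batch size only changes `max_batch`; the at-least-once guarantee comes entirely from where the ack sits relative to the BigQuery submit.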