
I feel like this is such a niche question, so I will try to explain it as best as I can.

Intro: I'm sending ~500 requests/second, and the more requests I send, the slower they are handled (it becomes noticeably slower at some point).

Question: So the question is: in Java, is there any way to prioritize a request? I want to optimize the speed of the request itself, so any solution that takes extra time before the request is sent is not a concern for me.

INFO: (I hope this is sufficient; if not, please tell me!)

  • The library I am using is Apache HttpClient (however, I can switch if the solution calls for it)
  • I am also multi-threading the requests on one server/PC. I hope this is helpful information.
  • CPU usage varies from 5-15% - I believe these are the relevant measurements

I am sending two types of requests, and I only need to prioritize one:

  1. HTTP GET Request - HTML Response expected
  2. HTTP POST Request - JSON response expected (although I do not need the response)

#2 is the request I want to prioritize. I send it rarely, but when I do, I need it to be as quick as possible.

Solutions thought of: The only solution I have come up with is to stop/end all of the live connections in order to execute the request I want. However, I think doing so would take a considerable amount of time, making the solution a waste of time.

Note: You could say I am an idiot in this area, so if the solution is non-existent or obvious, I am sorry. Also, if there is a duplicate, I am sorry; I could not find any questions even close to this.

  • You don't say what kind of request you are sending, or whether your bottleneck is on the sending or the receiving end; there are a number of details which no doubt define the problem. You haven't given anyone enough information to help you. I see that you've tagged the question "httpclient", but I have to guess the requests you're talking about are HTTP, which isn't enough to help. You need to think about explaining your problem to people who have no idea what you're trying to do. Commented Jan 21, 2021 at 2:21
  • @arcy I tried to add as much information as I could... if there is any information I am lacking, please let me know! Commented Jan 21, 2021 at 2:35
  • You are not an idiot, by far. This is a very common scenario in real-life applications, at least in what I have been involved with so far. Ideally, you want your server to support HTTP/2 and prioritization. But you will very soon find out that: 1) very few support HTTP/2; 2) even fewer correctly implement prioritization (if they do at all). What we ended up doing is have two thread pools before the request is sent to the server. One of them has threads with Thread.MAX_PRIORITY and the other one Thread.MIN_PRIORITY. Based on the path of the request we are supposed to make, we hand that Commented Jan 21, 2021 at 2:39
  • to the appropriate pool, which will invoke the client. So, for example, .../high-priority -> pool with threads at Thread.MAX_PRIORITY -> actual client, and .../everything-else -> pool with threads at Thread.MIN_PRIORITY -> actual client. This has somehow worked. We are still to find a more viable solution. Commented Jan 21, 2021 at 2:40
  • @Eugene This is actually pretty interesting. I might try something like that out if nothing else viable is suggested. Thank you so much! Commented Jan 21, 2021 at 3:02

2 Answers


This may be a workaround, as it must be executed before the requests are sent. Taking into account your use case (500 requests per second), my suggestion is to send the most critical ones first, by using a PriorityQueue.

Since you already batch the messages in order to send them, this approach would help order the batched messages based on the assigned priority.


You could first wrap the requests in another entity that holds a priority field. For example, a skeleton/base PriorityRequest class:

public class PriorityRequest implements Comparable<PriorityRequest> {
    public int priority;

    public PriorityRequest(int priority) {
        this.priority = priority;
    }

    @Override
    public int compareTo(PriorityRequest request) {
        // Reversed comparison, so higher priorities leave the queue first
        return Integer.compare(request.priority, this.priority);
    }
}

And declare both children, one wrapping HttpPost and one wrapping HttpGet:

public class PriorityHttpPost extends PriorityRequest {
    public HttpPost post;

    public PriorityHttpPost(int priority, HttpPost post) {
        super(priority);
        this.post = post;
    }
}

public class PriorityHttpGet extends PriorityRequest {
    public HttpGet get;

    public PriorityHttpGet(int priority, HttpGet get) {
        super(priority);
        this.get = get;
    }
}

So, as you create the requests, you can insert them into the queue and they will automatically be positioned according to their priority:

Queue<PriorityRequest> requestQueue = new PriorityQueue<>();

// inside the batch mechanism
requestQueue.add(new PriorityHttpPost(6, httpPost));
//...
requestQueue.add(new PriorityHttpGet(99, httpGet));
//...

This way, you guarantee that requests with a higher priority leave the queue before the lower-priority ones, as the queue orders them in descending order.

Queue- | Get  (99) | --> out
       | Get  (9)  |
       | Post (6)  |
       | Get  (3)  |
       | Post (1)  |

Queue- | Get  (9)  | --> out
       | Post (6)  |  
       | Get  (3)  |
       | Post (1)  |

        (...)
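As a self-contained sketch of this ordering (using a plain String label instead of HttpPost/HttpGet, so it runs without the HttpClient dependency):

```java
import java.util.PriorityQueue;
import java.util.Queue;

public class PriorityOrderingDemo {
    // Same idea as PriorityRequest: reversed compareTo so higher priorities poll first
    static class Item implements Comparable<Item> {
        final int priority;
        final String label;

        Item(int priority, String label) {
            this.priority = priority;
            this.label = label;
        }

        @Override
        public int compareTo(Item other) {
            return Integer.compare(other.priority, this.priority);
        }
    }

    public static void main(String[] args) {
        Queue<Item> queue = new PriorityQueue<>();
        queue.add(new Item(6, "Post"));
        queue.add(new Item(99, "Get"));
        queue.add(new Item(3, "Get"));
        queue.add(new Item(1, "Post"));
        queue.add(new Item(9, "Get"));

        // Items leave the queue in descending priority order
        while (!queue.isEmpty()) {
            Item item = queue.poll();
            System.out.println(item.label + " (" + item.priority + ")");
        }
    }
}
```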

Just to finish, a little extra feature of this approach (in certain use cases) would be the ability to define which elements go first and which go last:

requestQueue.add(new PriorityHttpPost(Integer.MAX_VALUE, httpPostMax));
requestQueue.add(new PriorityHttpPost(Integer.MAX_VALUE - 1, httpPostVery));
requestQueue.add(new PriorityHttpPost(Integer.MIN_VALUE + 1, httpPostNotVery));
requestQueue.add(new PriorityHttpPost(Integer.MIN_VALUE, httpPostNoOneCares));

(A perfect world, yeah, I know...)

Queue- | Post (MAX)   | --> out
       | Post (MAX-1) |
       | ............ |
       | ............ |
       | Post (MIN+1) |
       | Post (MIN)   |

3 Comments

Thank you for your answer!! I am, however, going to mark the other response as the answer, because even with a queue I would have issues with threads concurrently trying to pull/add/remove values. I could use a "PriorityBlockingQueue" to get around this issue, but that itself has issues with the speed of inserting and removing values (it temporarily blocks other threads from accessing the queue until the action is done, which is a problem for me). The other solution offers a hacky way around this problem which I can luckily use.
@Thezi you're so welcome!! Regarding Eugene's answer, I also think it's the best approach for your problem; as I wrote at the start of my answer, take this as a workaround. No one but you knows the details of your specific use case better, so no problem with me : ) (I myself upvoted Eugene's answer). Again, thanks for your words!
@aran Both the answers are valuable.

Ideally, you never want to do that on the client. You want this on the server, but I do understand that this might not be an option.

(Not going to mention HTTP/2 and priority since I already did in the comments).

The easiest way to think about it is: "I'll just sort them based on some XXX rule". You will then realize you need a Queue/Deque implementation, most probably a thread-safe one. You will want some threads to put entries into this queue and other threads to remove them, so you will need a thread-safe PriorityQueue. And, afaik, there are only blocking implementations of such, which means you can end up artificially delaying non-priority requests for no reason. It gets worse: say you have 100 PUT requests and only one has HIGH priority. You have already received the requests, but since you have no control over how the threads that insert into this queue are scheduled, your HIGH priority request may be put last.
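For reference, the thread-safe blocking implementation being described is java.util.concurrent.PriorityBlockingQueue. A minimal sketch of how it orders concurrent inserts (here negating the priority so that higher numbers come out first, since the natural ordering of Integer is ascending):

```java
import java.util.concurrent.PriorityBlockingQueue;

public class BlockingPriorityDemo {
    public static void main(String[] args) throws InterruptedException {
        PriorityBlockingQueue<Integer> queue = new PriorityBlockingQueue<>();

        // A producer thread inserts priorities concurrently...
        Thread producer = new Thread(() -> {
            for (int p : new int[] {5, 1, 9}) {
                queue.put(-p); // negate so priority 9 (highest) sorts first
            }
        });
        producer.start();
        producer.join();

        // ...and take() blocks until an element is available, then always
        // returns the highest-priority (smallest stored) element
        System.out.println(-queue.take()); // 9
        System.out.println(-queue.take()); // 5
        System.out.println(-queue.take()); // 1
    }
}
```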

What we did is slightly different. We get all requests and dispatch them to two different thread pools, based on their paths.

.../abc -> place in queueA -> process by thread-pool-A
.../def -> place in queueB -> process by thread-pool-B

thread-pool-A uses threads with Thread.MIN_PRIORITY and thread-pool-B uses Thread.MAX_PRIORITY. For that to sort of work, you need to read this, rather carefully. I wish I could tell you that this worked smoothly, or that I have actual numbers from real production, but I have long since moved to a different workplace.
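To make the two-pool setup concrete, here is a minimal sketch. The pool sizes, the path rule, and the helper name poolWithPriority are illustrative assumptions, not the actual production code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;

public class TwoPoolDispatcher {
    // Hypothetical helper: builds a fixed pool whose threads run at the given priority
    static ExecutorService poolWithPriority(int nThreads, int priority) {
        ThreadFactory factory = runnable -> {
            Thread t = new Thread(runnable);
            t.setPriority(priority);
            return t;
        };
        return Executors.newFixedThreadPool(nThreads, factory);
    }

    private final ExecutorService lowPriorityPool  = poolWithPriority(8, Thread.MIN_PRIORITY);
    private final ExecutorService highPriorityPool = poolWithPriority(2, Thread.MAX_PRIORITY);

    // Dispatch based on the request path, as described above
    void dispatch(String path, Runnable sendRequest) {
        if (path.startsWith("/high-priority")) {
            highPriorityPool.submit(sendRequest);
        } else {
            lowPriorityPool.submit(sendRequest);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TwoPoolDispatcher d = new TwoPoolDispatcher();
        d.dispatch("/high-priority/submit", () ->
            System.out.println("high pool priority: " + Thread.currentThread().getPriority()));
        d.dispatch("/everything-else", () ->
            System.out.println("low pool priority: " + Thread.currentThread().getPriority()));
        d.highPriorityPool.shutdown();
        d.lowPriorityPool.shutdown();
        d.highPriorityPool.awaitTermination(5, TimeUnit.SECONDS);
        d.lowPriorityPool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Note that thread priority is only a scheduling hint to the OS, so the effect depends on platform and load.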

This is just to give you an idea that there is yet another way to do it.

4 Comments

This answer is perfect for me; I understand it a lot more now that you elaborated on it. Thank you for the article about threads - I am very glad I read it (if anyone is having the same issue, they need to read it).
@Thezi I should caution that based on what you've said (CPU usage on the client not being particularly high), I suspect the bottleneck is either the server or the network, so thread priority isn't likely to play a part - both thread pools will be scheduled promptly, irrespective of the thread priority settings. This may still provide a benefit, but for much the same reason that "priority check in" queues at airports are faster - it's not the prioritisation, so much as the fact that there are fewer people in that queue, that means that that queue goes faster.
Just a friendly suggestion, but it'd be better to at least include brief information about HTTP2 and priority in the answer (or post another answer regarding this) since comments are ephemeral and not really for answering the question which might get deleted at any time.
@James_pic Yeah, you are right. I think this is a good template for me to build off of; however, whenever I get the super important request that I need to send ASAP, I could try halting "thread-pool-A" and maybe even go as far as interrupting the live threads in that pool (as they no longer matter to me) - I just need to see how effective this is and then determine the speed cost of doing so. I definitely need to look more into this, but I feel like I am at a good starting place. Thank you! :)
