0 votes
1 answer
64 views

I have a Spark job running with 260K tasks. I can check an individual task's executor computing time in the Spark UI. To calculate the resource usage of the whole job, how can I summarize all ...
Brian Mo
0 votes
1 answer
120 views

I am using df.cache() to cache a DataFrame and am using Databricks autoscaling with min instances of 1 and max instances of 8. But the cache doesn't work properly here because some executors die in the middle ...
gaurav narang
0 votes
0 answers
53 views

There is a list of objects, List&lt;Foo&gt; fooDetails, which I will use to create a Bar request corresponding to each element of fooDetails from the database and then make a rest template call. public void ...
Anurator • 125
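A minimal sketch of one way to fan such calls out on an executor; Foo, Bar, buildBar and callRemote below are hypothetical stand-ins for the question's types, database lookup and rest template call.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FanOutSketch {
    record Foo(String id) {}          // hypothetical stand-ins for the question's types
    record Bar(String payload) {}

    // hypothetical mapper; the real one would build the request from the database
    static Bar buildBar(Foo foo) {
        return new Bar("request-for-" + foo.id());
    }

    // hypothetical remote call; the question uses a rest template here
    static String callRemote(Bar bar) {
        return "response-for-" + bar.payload();
    }

    public static void main(String[] args) {
        List<Foo> fooDetails = List.of(new Foo("a"), new Foo("b"), new Foo("c"));
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<CompletableFuture<String>> futures = new ArrayList<>();
            for (Foo foo : fooDetails) {
                // build the Bar request and call the remote service off the caller thread
                futures.add(CompletableFuture.supplyAsync(() -> callRemote(buildBar(foo)), pool));
            }
            futures.forEach(f -> System.out.println(f.join()));  // wait for and print each response
        } finally {
            pool.shutdown();
        }
    }
}
```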
0 votes
1 answer
98 views

I'm studying Core Java through the DMDEV course on Udemy and YouTube. In Level 2, in the Concurrency lessons, I think I spotted a mistake in the code the tutor wrote, but my question to him has gone unanswered. So ...
jd199 • 1
1 vote
1 answer
620 views

My friend's 11-year-old son visited my home with his family, and I let him use my desktop. Kids of this generation are geniuses with computers, and he installed a cloud-based game program called Comet Executor. It hid ...
Justine Win
0 votes
1 answer
56 views

For context, I am making an app which communicates with a TCP server currently hosted on my local PC. I am using SSLSocket for a secure connection. All communication works fine. I have made a special class ...
Balpreet Singh
1 vote
1 answer
692 views

The documentation of boost::asio::this_coro::executor states that it is an Awaitable object that returns the executor of the current coroutine. To me this seems somewhat vague as soon as multiple ...
Reizo • 1,477
2 votes
1 answer
875 views

While experimenting with boost::asio::awaitable and executors, I keep observing some rather confusing behaviour that I would like to understand better. Preparation: please take a look at the following ...
Reizo • 1,477
2 votes
1 answer
306 views

I have some tasks defined as Runnable objects. I want to invoke those tasks by using the invokeAll or invokeAny methods on ExecutorService. The problem is that those methods take a collection of ...
Basil Bourque
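One common bridge, sketched below, is Executors.callable, which wraps a Runnable as a Callable so the collection fits invokeAll or invokeAny; the tasks themselves are illustrative.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RunnableInvokeAll {
    public static void main(String[] args) throws InterruptedException {
        List<Runnable> tasks = List.of(
                () -> System.out.println("task 1"),
                () -> System.out.println("task 2"));

        // Wrap each Runnable as a Callable<Object> so invokeAll accepts the collection
        List<Callable<Object>> callables = tasks.stream()
                .map(r -> Executors.callable(r))
                .toList();

        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            pool.invokeAll(callables);   // blocks until every wrapped task completes
        } finally {
            pool.shutdown();
        }
    }
}
```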
0 votes
1 answer
60 views

How can I check whether it is true or false? I'm using an executor, but success seems not to change its value. Is there any trick to do this in an 'easy' way? I'm able to reach inside the try/catch, but it seems that its ...
user16726047
0 votes
1 answer
183 views

I'm trying to make an API-checking app that must send requests asynchronously using a pool of bots until the termination date is reached. On startup, a final List&lt;Bot&gt; is created and ...
IndependenceCR
0 votes
0 answers
128 views

I'm developing an app that performs tests on the mobile network. The app has different Fragments (Home, Dashboard and Notification). The Home fragment is able to start a periodic test using the Executors ...
Giorgio Torassa
1 vote
1 answer
968 views

While working with ExecutorService, mocking is not working. The following is the code: import java.util.ArrayList; import java.util.List; import java.util.concurrent.ExecutorService; import java.util....
Feku279 • 113
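A frequently used workaround, sketched below under the assumption that Mockito is on the classpath, is to stub the mocked ExecutorService so submitted work runs synchronously on the test thread, which makes the behaviour deterministic and verifiable.

```java
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.mock;

import java.util.concurrent.ExecutorService;

public class SynchronousExecutorStub {
    public static void main(String[] args) {
        ExecutorService executor = mock(ExecutorService.class);

        // Run any Runnable handed to execute() immediately on the calling thread,
        // so code under test that uses the executor can be verified right away.
        doAnswer(invocation -> {
            Runnable task = invocation.getArgument(0);
            task.run();
            return null;
        }).when(executor).execute(any(Runnable.class));

        executor.execute(() -> System.out.println("ran synchronously"));
    }
}
```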
-2 votes
1 answer
296 views

I have a unique requirement for which I am unable to find relevant references online. We usually get a huge map of employee objects from a data warehouse application every weekend, where for every ...
Romano • 51
0 votes
2 answers
634 views

I'm trying to run GroupByTest using Spark standalone mode. I've run it successfully on one machine; when I try to run it on a different machine it works, but it seems like the driver is the only instance ...
Brave • 329
0 votes
1 answer
235 views

I am a beginner with Spark. In some executions, a java.lang.OutOfMemoryError: Java heap space is raised: java.lang.OutOfMemoryError: Java heap space at java.base/java.nio.HeapByteBuffer ...
Tadeo • 13
1 vote
0 answers
409 views

I'm using the python3 aiogram library for a Telegram bot. Is there a way to remove the messages printed to the console by the executor? Messages like Goodbye! or Updates were skipped successfully. If you ...
OscarAkaElvis
1 vote
2 answers
1k views

Do Spark's stages within a job run in parallel? I know that within a job in Spark, multiple stages could be running in parallel, but when I checked, it seems like the executors are doing a context ...
Sungju Kim
-1 votes
1 answer
986 views

I am using Spark 2.3.2.3.1.0.0-78. I tried to use spark_session.sparkContext._conf.get('spark.executor.memory') but I only received None. How can I get spark.executor.memory's value?
Sonnh • 101
1 vote
1 answer
108 views

I'm running a ThreadPoolExecutor to which I submit some tasks: executor = ThreadPoolExecutor(thread_name_prefix='OMS.oms_thread_', max_workers=16) task = executor.submit(method_to_run, args) I know ...
hoodakaushal • 1,313
0 votes
1 answer
1k views

I am trying to reason about Spark's default behavior here. Here's my scenario: I am submitting a Spark job on a cluster with one master and 10 core/slave nodes. These core nodes have been randomly ...
Niko • 850
0 votes
0 answers
67 views

I have configured a Spring ThreadPoolTaskExecutor, keeping in mind an average load of 1.1k rpm. @Qualifier("executor") public Executor asyncExecutor() { this....
Misha Bick
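A minimal sketch of a ThreadPoolTaskExecutor bean in the same spirit; the pool sizes, queue capacity and bean name below are illustrative assumptions, not tuning advice for this load.

```java
import java.util.concurrent.Executor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
public class AsyncConfig {

    @Bean("executor")
    public Executor asyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(8);        // threads kept alive for the steady load
        executor.setMaxPoolSize(16);        // upper bound during bursts
        executor.setQueueCapacity(200);     // backlog allowed before extra threads are created
        executor.setThreadNamePrefix("async-");
        executor.initialize();
        return executor;
    }
}
```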
1 vote
0 answers
145 views

I have a Spark cluster of 3 servers (1 worker per server = 3 workers). The resources are very much the same across servers (70 cores, 386GB of RAM each). I also have an application that I spark-submit,...
Andreas Lampropoulos
0 votes
0 answers
593 views

I am working in a multithreaded environment with many methods being called via Executors. The code worked fine earlier even for large data, but it threw a NullPointerException once. Here is the ...
Gopal Gupta
4 votes
1 answer
4k views

I am trying to use a GitLab runner installed on my Windows machine, but pipeline execution fails with this error: ERROR: Job failed (system failure): prepare environment: failed to start process: exec: ...
ketan • 75
4 votes
0 answers
472 views

I have a task that runs in an endless while loop, reading from a queue of numbers and checking whether there is a sequence gap. Meanwhile, the main thread listens to UDP multicast, and if it receives a packet, it ...
pebble unit • 1,441
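A stripped-down sketch of that shape, assuming a BlockingQueue of sequence numbers fed by the receiving thread; the names and the injected gap are illustrative.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class GapCheckerSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Long> sequenceNumbers = new LinkedBlockingQueue<>();
        ExecutorService worker = Executors.newSingleThreadExecutor();

        // The endless consumer: blocks on take() instead of spinning, and flags gaps.
        worker.submit(() -> {
            long expected = 1;
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    long seq = sequenceNumbers.take();
                    if (seq != expected) {
                        System.out.println("gap: expected " + expected + " but got " + seq);
                    }
                    expected = seq + 1;
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();   // lets shutdownNow() stop the loop
            }
        });

        // Stand-in for the UDP receive path: offer sequence numbers, skipping 3 to create a gap.
        for (long i : new long[] {1, 2, 4, 5}) {
            sequenceNumbers.put(i);
        }
        Thread.sleep(200);
        worker.shutdownNow();
    }
}
```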
6 votes
1 answer
14k views

I am writing code to run a pool executor with a function that takes two arguments. args=[(0,users[0]),(1,users[1]),(2,users[2]),(3,users[3]),(4,users[4]),(5,users[5]),(6,users[6])] if __name__ ==...
Ahmed Tantawy
0 votes
1 answer
2k views

spark = SparkSession.builder.getOrCreate() spark.sparkContext.getConf().get('spark.executor.instances') # Result: None spark.conf.get('spark.executor.instances') # Result: java.util....
alryosha • 753
2 votes
0 answers
556 views

I have installed Apache Airflow on localhost using Ubuntu. The executor can't be loaded; this is the traceback: [2022-12-20 22:11:13,927] {manager.py:343} WARNING - Ending without manager process. [...
dipretelin
0 votes
1 answer
331 views

If I run a simple set of document updates on the CosmosDB container using container.executeBulkOperations(bulkOperations), I get strange, suspicious log messages from the underlying Cosmos DB Java ...
Jan Peremský
1 vote
2 answers
732 views

This is the method call whose invocation should be verified: Mockito.verify(messageHandler).handleMessage(message); and this method is called inside the prepareContext() method, which is called ...
Hayk Mkhitaryan
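One way this is commonly made verifiable, sketched under the assumption that the executor is injectable, is to pass a same-thread executor in tests so handleMessage has already run by the time verify is called; the surrounding class and types here are illustrative, only handleMessage and prepareContext come from the question.

```java
import java.util.concurrent.Executor;

public class DirectExecutorSketch {

    interface MessageHandler {              // illustrative stand-in for the question's handler
        void handleMessage(String message);
    }

    static class ContextPreparer {
        private final MessageHandler messageHandler;
        private final Executor executor;

        ContextPreparer(MessageHandler messageHandler, Executor executor) {
            this.messageHandler = messageHandler;
            this.executor = executor;
        }

        void prepareContext(String message) {
            // asynchronous in production, synchronous in tests when a direct executor is injected
            executor.execute(() -> messageHandler.handleMessage(message));
        }
    }

    public static void main(String[] args) {
        MessageHandler handler = msg -> System.out.println("handled " + msg);
        // Runnable::run is a "direct" executor: the task runs on the calling thread,
        // so a Mockito.verify(...) right after prepareContext(...) would see the call.
        ContextPreparer preparer = new ContextPreparer(handler, Runnable::run);
        preparer.prepareContext("hello");
    }
}
```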
0 votes
1 answer
41 views

If I have this: import concurrent.futures def SetParameters(): return { "processVal" : 'q', "keys" : { "key1" : { "fn" : a, ...
Weylin Piegorsch
0 votes
3 answers
2k views

I have been working on a Spring Boot application where the JVM memory is limited to 2 GB. At the controller, the user sends a request which is handled by my Executors thread pool. It is a lightweight ...
Thinker • 6,952
1 vote
1 answer
198 views

What is the point of creating an ExecutorService with a single thread in Java multithreading? That is, why not create a single separate thread instead of an ExecutorService with a single thread? Which is ...
user HP • 33
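A short sketch of what the single-thread executor adds over a bare new Thread(...): one reused worker, an internal task queue, and a Future per submission.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SingleThreadExecutorDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService single = Executors.newSingleThreadExecutor();
        try {
            // Both tasks reuse the same worker thread and run one after another,
            // and each submission returns a Future, unlike new Thread(...).start().
            Future<Integer> first = single.submit(() -> 1 + 1);
            Future<Integer> second = single.submit(() -> 2 + 2);
            System.out.println(first.get() + ", " + second.get());
        } finally {
            single.shutdown();
        }
    }
}
```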
2 votes
0 answers
984 views

I am trying to forecast sales using Prophet on my Databricks cluster through a grouped map pandas UDF. The problem is that each time I run it, one or two executors get stuck running ...
Philippe Alain Sigue
0 votes
1 answer
2k views

I'm in an organization where Hadoop/Spark is available, but I can't alter its configuration and can't access it either. So, from client code, I was wondering whether I can ask Spark to retry failing tasks ...
tomoyo255 • 303
1 vote
2 answers
383 views

I would like to know which one I should use in this particular scenario: there are many tasks, usually 400k, to process. Most of the tasks take less than 4 seconds to process, but some of them (300 ...
Prakash Kumar
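A sketch of one middle-ground setup for that kind of mix: a fixed pool sized to the hardware with a bounded queue so hundreds of thousands of submissions cannot exhaust memory, and caller-runs as back-pressure. The sizes and task body are assumptions, not a recommendation for this workload.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolSketch {
    public static void main(String[] args) throws InterruptedException {
        int workers = Runtime.getRuntime().availableProcessors();

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                workers, workers,
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>(10_000),               // bounded backlog
                new ThreadPoolExecutor.CallerRunsPolicy());      // back-pressure instead of rejection

        for (int i = 0; i < 100_000; i++) {
            final int taskId = i;
            pool.execute(() -> {
                // stand-in for the real work; most tasks are short, a few are long
                if (taskId % 10_000 == 0) {
                    System.out.println("processed " + taskId);
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```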
-2 votes
2 answers
664 views

I have tested the Callable sample code below: ExecutorService executorService = Executors.newSingleThreadExecutor(); Future&lt;String&gt; futureResMap = executorService.submit(new Callable&lt;String&gt;...
Saravanakumar Arunachalam
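For reference, a completed, runnable version of that shape; the returned string is illustrative.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableSample {
    public static void main(String[] args) throws Exception {
        ExecutorService executorService = Executors.newSingleThreadExecutor();
        try {
            Future<String> futureResMap = executorService.submit(new Callable<String>() {
                @Override
                public String call() {
                    return "result from callable";   // illustrative return value
                }
            });
            System.out.println(futureResMap.get());  // blocks until call() finishes
        } finally {
            executorService.shutdown();
        }
    }
}
```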
0 votes
1 answer
224 views

I am running an Amazon EMR cluster with 20 Spark applications; the cluster configuration is 1 master node and 2 worker nodes of the c5.24xlarge instance type. Giving 3 executors and one driver to each ...
sriparth
0 votes
1 answer
498 views

I've deployed Apache Zeppelin 0.10.1 on Kubernetes. It uses Spark version 3.2.1. My problem is that the executors can't communicate with each other while shuffling, but can still exchange data with ...
dontoronto
0 votes
1 answer
409 views

I've got some simple test code: public static void main(String[] args) { CompletionService&lt;Integer&gt; cs = new ExecutorCompletionService&lt;&gt;(Executors.newCachedThreadPool()); cs.submit(new ...
Troskyvs • 8,277
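A runnable sketch of that pattern; take() hands back futures in completion order, and the underlying pool still has to be shut down separately.

```java
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CompletionServiceDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newCachedThreadPool();
        CompletionService<Integer> cs = new ExecutorCompletionService<>(pool);
        try {
            cs.submit(() -> 1);
            cs.submit(() -> 2);
            cs.submit(() -> 3);

            // take() blocks for the next *completed* task, regardless of submission order
            for (int i = 0; i < 3; i++) {
                System.out.println(cs.take().get());
            }
        } finally {
            pool.shutdown();   // the completion service does not shut the pool down itself
        }
    }
}
```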
0 votes
1 answer
840 views

I am piping the partitions of an RDD through an external executable. I use sparkContext.addFiles() so that the executable will be available to the workers. When I attempt to run the code, I am ...
mikelus • 1,074
2 votes
1 answer
4k views

Ok, I don't have enough code yet for a fully working program, but I'm already running into issues with "executors". EDIT: this is Boost 1.74 -- Debian doesn't give me anything more current. ...
Christian Stieber
1 vote
2 answers
2k views

I have an IO-bound task running in a loop. This task does a lot of work and often hogs the loop (is that the right word for it?). My plan is to run it in a separate process or ...
Jeremiah Payne
0 votes
1 answer
283 views

I do not understand this option. It seems like it's the maximum number of executors. If there is not enough memory on the nodes in the cluster, this number does nothing and there are fewer executors ...
idan ahal • 943
3 votes
2 answers
1k views

I am getting two types of errors when running a job on Google Dataproc, and they are causing executors to be lost one by one until the last executor is lost and the job fails. I have set my master node to ...
jmuth • 71
0 votes
1 answer
588 views

I have a very simple Java code snippet: ExecutorService executor = null; try { executor = Executors.newFixedThreadPool(4); for (int i = 0; i &lt; 10; ...
Peter • 133
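For reference, a runnable version of that snippet with shutdown handled in the finally block; the task body is illustrative.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FixedPoolLoop {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = null;
        try {
            executor = Executors.newFixedThreadPool(4);
            for (int i = 0; i < 10; i++) {
                final int taskId = i;
                executor.submit(() ->
                        System.out.println("task " + taskId + " on " + Thread.currentThread().getName()));
            }
        } finally {
            if (executor != null) {
                executor.shutdown();                              // stop accepting new tasks
                executor.awaitTermination(10, TimeUnit.SECONDS);  // wait for the 10 tasks to finish
            }
        }
    }
}
```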
1 vote
1 answer
545 views

The sample code is taken from the Java Philosophy (2015) book, which used Java SE 5/6. I am using JDK 11; maybe this code is not suitable for the newer version, but why? public static void main(String[] args) { ...
General Esdeath
2 votes
0 answers
277 views

I'm working with Spark and YARN on an Azure HDInsight cluster, and I have some trouble understanding the relationship between the workers' resources, executors and containers. My cluster has 10 ...
andream • 43
0 votes
0 answers
1k views

Spark application deploy mode: standalone. I want to know why, with the same input data, the computing time for a task is so different between two different "WordCount" programs. For example: 1....
wl2top • 9
