
quaquel / EMAworkbench · build 6014648074
Coverage: 81% (master: 93%)
Last build branch: ci_concurrency
Default branch: master
Ran: 29 Aug 2023 04:08PM UTC
Jobs: 1 · Files: 34 · Run time: 2s


29 Aug 2023 03:45PM UTC · coverage: 81.097% · remained the same
Build 6014648074 · push · github

Commit by EwoutH: SLURM script: Let MPIPoolExecutor manage processes

The error you're encountering is related to how mpi4py's `MPIPoolExecutor` works under the hood.

When you launch your script with `mpirun`, it already spawns multiple MPI processes. `MPIPoolExecutor` then tries to spawn additional worker processes on top of each of those, so you end up requesting processes on cores that are already allocated. That is what triggers the "All nodes which are allocated for this job are already filled" error.

Here's how to address this:

1. **Avoid Nested Parallelism**: Don't combine `mpirun` with `MPIPoolExecutor`. Either use the classic MPI approach (explicit send/receive between ranks) or let `MPIPoolExecutor` handle the parallelism, but not both.

2. **Using `MPIPoolExecutor` without `mpirun`**: Run your Python script normally (i.e., without `mpirun`); `MPIPoolExecutor` then creates the MPI worker processes itself and distributes tasks across them.

   Adjust your SLURM script:
   ```bash
   #!/bin/bash
   # Other directives...

   # No mpirun needed: MPIPoolExecutor spawns the MPI workers itself
   python my_model.py > py_test.log
   ```

3. **Adjust the Code**: Since you're launching without `mpirun`, you no longer need to inspect the world size or check it against the number of jobs; `MPIPoolExecutor` manages the tasks for you automatically (a minimal sketch follows this list).

4. **Optionally, use `MPI.COMM_WORLD.Spawn`**: If you want more control, you can use `MPI.COMM_WORLD.Spawn()` to launch the worker processes yourself instead of relying on `MPIPoolExecutor` (sketched after the closing note below).
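
To make points 2 and 3 concrete, here is a minimal sketch of what `my_model.py` could look like under this approach. The `run_experiment` function and the dummy workload are assumptions for illustration, not part of the actual script; the point is that the file is launched with plain `python` and `MPIPoolExecutor` spawns the MPI workers itself.

```python
# my_model.py -- hypothetical contents; launch with `python my_model.py` (no mpirun)
from mpi4py.futures import MPIPoolExecutor


def run_experiment(params):
    # Placeholder for the real model evaluation; must return something picklable.
    return params * 2


if __name__ == "__main__":
    experiments = range(100)  # placeholder workload

    # MPIPoolExecutor spawns the MPI worker processes on demand; max_workers
    # is optional and defaults to what the MPI universe size allows.
    with MPIPoolExecutor() as executor:
        results = list(executor.map(run_experiment, experiments))

    print(f"Completed {len(results)} experiments")
```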

Lastly, always be cautious when working in an HPC environment. Nested parallelism can exhaust resources and interfere with other users' jobs. Test on a small subset of cores/nodes first and monitor your jobs to make sure they behave as expected.
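
For completeness, here is a rough sketch of the `MPI.COMM_WORLD.Spawn()` route from point 4. Everything in it is illustrative: the `worker.py` script, the number of workers, and the send/receive protocol are assumptions, and the worker side (not shown) would obtain the intercommunicator via `MPI.Comm.Get_parent()` and mirror the messaging.

```python
# manager.py -- hypothetical manager side of the Spawn-based alternative
import sys

from mpi4py import MPI

if __name__ == "__main__":
    n_workers = 4  # assumption: match this to your SLURM allocation

    # Spawn the worker processes; the returned intercommunicator links this
    # manager process to the remote group of workers.
    intercomm = MPI.COMM_WORLD.Spawn(
        sys.executable, args=["worker.py"], maxprocs=n_workers
    )

    # Hand each worker one chunk of work, then collect one result per worker.
    experiments = list(range(100))  # placeholder workload
    chunks = [experiments[i::n_workers] for i in range(n_workers)]
    for rank, chunk in enumerate(chunks):
        intercomm.send(chunk, dest=rank, tag=0)
    results = [intercomm.recv(source=rank, tag=1) for rank in range(n_workers)]

    intercomm.Disconnect()
    print(f"Collected results from {len(results)} workers")
```

This gives you explicit control over how the work is divided, at the cost of writing the manager/worker messaging yourself; `MPIPoolExecutor` does essentially the same thing behind the scenes.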

4612 of 5687 relevant lines covered (81.1%)

0.81 hits per line

Jobs
ID  Job ID        Ran                      Files  Coverage
1   6014648074.1  29 Aug 2023 04:08PM UTC  0      81.1%
Source Files on build 6014648074
Detailed source file information is not available for this build.
Commit 1a94011a on github