
Rucio Daemons

Rucio relies on several daemons (background processes) to perform different pieces of logic. Most daemons connect to the database, read some data, perform some computation, and write data back into the database.

Usually one daemon creates work for another daemon and vice versa: in Rucio, daemons communicate with each other through the database. The following table gives a high-level view of each daemon's responsibility and basic functionality.

Daemons

| Name | Domain | Purpose |
| --- | --- | --- |
| rucio-abacus-account | Accounting | Updates account usage |
| rucio-abacus-collection-replica | Accounting | Updates collection replicas |
| rucio-abacus-rse | Accounting | Updates RSE counters |
| rucio-atropos | Replica | Ends the life of rules according to the Lifetime Model |
| rucio-auditor | Replica | Finds inconsistencies on storage, for example dark data discovery |
| rucio-automatix | Replica | Used for testing: injects random data into RSEs to check liveness |
| rucio-bb8 | Replica | Rebalances data across RSEs |
| rucio-cache-client | Replica | Populates information about replicas on volatile storage |
| rucio-cache-consumer | Replica | Adds and deletes cache replicas in the Rucio catalog |
| rucio-conveyor-finisher | Transfer | Updates Rucio internal state after a file transfer has finished |
| rucio-conveyor-poller | Transfer | Polls the transfer tool to check the transfer state |
| rucio-conveyor-preparer | Transfer | Prepares data transfers |
| rucio-conveyor-receiver | Transfer | Sister of the poller: instead of polling for updates, it reads transfer tool notifications to check the transfer state |
| rucio-conveyor-stager | Transfer | Issues staging (bring online) requests to tape RSEs |
| rucio-conveyor-submitter | Transfer | Submits transfer requests to the transfer tool (also prepares the transfer if the conveyor-preparer is not enabled) |
| rucio-conveyor-throttler | Transfer | Queues transfer requests inside Rucio, applying limits (e.g. only one transfer at a time) |
| rucio-dark-reaper | Deletion | Deletes quarantined replicas |
| rucio-dumper | Consistency | Dumps file lists; the rucio-auditor consumes these dumps to discover dark data |
| rucio-follower | Telemetry | Aggregates events affecting DIDs |
| rucio-hermes | Telemetry | Sends Rucio messages to external services (InfluxDB, OpenSearch, ActiveMQ, ...) |
| rucio-judge-cleaner | Rule | Cleans expired replication rules |
| rucio-judge-evaluator | Rule | Creates and evaluates replication rules based on their state (OK/REPL/STUCK) |
| rucio-judge-injector | Rule | Asynchronously injects replication rules |
| rucio-judge-repairer | Rule | Repairs stuck replication rules (STATE=STUCK) |
| rucio-kronos | Telemetry | Consumes Rucio tracing messages, updates the access time of replicas and the access count of DIDs |
| rucio-minos | Replica | Reads lists of physical file names (PFNs) declared bad and classifies them as temporarily unavailable or permanently unavailable (to be recovered by the necromancer daemon) |
| rucio-minos-temporary-expiration | Replica | Moves TEMPORARY_UNAVAILABLE replicas back into the AVAILABLE state |
| rucio-necromancer | Deletion | Works on permanently unavailable replicas: tries to recover the data from other valid replicas if any, otherwise declares the replica as lost |
| rucio-oauth-manager | Auth/Authz | Deletes expired access tokens (if there is a valid refresh token, expired access tokens are kept until the refresh token expires as well) and expired OAuth session parameters |
| rucio-reaper | Deletion | Deletes replicas that no longer have locks, i.e. those with a tombstone set |
| rucio-replica-recoverer | Replica | Declares suspicious replicas as bad if they are available on other RSEs, so the necromancer can work on them |
| rucio-rse-decommissioner | Deletion | Decommissions an RSE; the actions to perform are specified in decommissioning profiles (delete all data, move replicas, etc.) |
| rucio-storage-consistency-actions | Consistency | Applies corrective actions as a result of a consistency check on an RSE |
| rucio-transmogrifier | Rule | Creates replication rules for DIDs matching a subscription |
| rucio-undertaker | Deletion | Manages expired DIDs, deleting them (does not delete replicas) |

FAQ

Conveyor daemons

It is important to know the following:

  • The throttler daemon requires the preparer to work.
  • The preparer optimizes transfer requests; it is recommended but not mandatory.
  • The submitter is the only daemon strictly needed to submit transfers, and it can do a subset of what the preparer does.
  • To update the state of requests, either the conveyor poller (which polls the transfer tool for changes) or the conveyor receiver (which listens for notifications) is needed.
  • The finisher analyzes this new state and updates the Rucio internal state accordingly.
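The pipeline above can be sketched as a small state machine. This is a conceptual illustration only: the request states, the in-memory transfer_tool dictionary, and the three functions are simplified stand-ins for Rucio's actual request table and its interaction with an external transfer tool such as FTS.

```python
# Conceptual sketch of the conveyor pipeline (not Rucio's actual code):
# each daemon reads requests in one state from the DB and moves them on.

requests = [{"id": 1, "state": "QUEUED"}, {"id": 2, "state": "QUEUED"}]
transfer_tool = {}  # stands in for FTS or another external transfer tool

def submitter(reqs):
    """Submit QUEUED requests to the transfer tool."""
    for r in reqs:
        if r["state"] == "QUEUED":
            transfer_tool[r["id"]] = "DONE"   # pretend the transfer succeeds
            r["state"] = "SUBMITTED"

def poller(reqs):
    """Poll the transfer tool and record the terminal state."""
    for r in reqs:
        if r["state"] == "SUBMITTED":
            r["state"] = transfer_tool[r["id"]]

def finisher(reqs):
    """Update Rucio-internal state (rules, replicas) for finished requests."""
    for r in reqs:
        if r["state"] == "DONE":
            r["state"] = "FINISHED"

submitter(requests)
poller(requests)
finisher(requests)
print([r["state"] for r in requests])  # ['FINISHED', 'FINISHED']
```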

What happens when a rule is stuck?

The judge repairer analyzes why the transfer is stuck and tries to get it moving again, eventually resubmitting the request.

What happens when new data is added to an existing dataset that already has replicas?

The judge evaluator will keep track of new data added to datasets that are already replicated to trigger the necessary transfer requests to ensure all new data is copied to the RSEs.
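The evaluator's behavior can be sketched as follows. This is a conceptual illustration, not Rucio's implementation: the in-memory sets stand in for the catalog, and evaluate() stands in for the rule evaluation that creates transfer requests.

```python
# Conceptual sketch (not Rucio's actual code): when new files are attached to
# a dataset, the evaluator creates transfer requests so every RSE holding the
# dataset also receives the new files.

dataset_files = {"file1", "file2"}
rse_contents = {"RSE_A": {"file1", "file2"}, "RSE_B": {"file1", "file2"}}

def attach(new_files):
    """Add new files to the dataset (what a client attach would do)."""
    dataset_files.update(new_files)

def evaluate():
    """Return one transfer request per (file, RSE) still missing the file."""
    transfers = []
    for rse, present in rse_contents.items():
        for f in sorted(dataset_files - present):
            transfers.append((f, rse))
    return transfers

attach({"file3"})
print(evaluate())  # [('file3', 'RSE_A'), ('file3', 'RSE_B')]
```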

What is the purpose of the minos daemons?

A human operator can declare some datasets as temporarily unavailable due to maintenance, outages, etc. The operator sets an expiration time on the temporarily unavailable status; when the expiration time is reached, minos-temporary-expiration puts the replicas back into the available state.
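The expiration step can be sketched as below. This is a conceptual illustration only; the replica fields and state names are simplified stand-ins for Rucio's replica table.

```python
# Conceptual sketch (not Rucio's actual code): move TEMPORARY_UNAVAILABLE
# replicas back to AVAILABLE once their expiration time has passed.
from datetime import datetime

now = datetime(2024, 1, 2)
replicas = [
    {"pfn": "a", "state": "TEMPORARY_UNAVAILABLE", "expires_at": datetime(2024, 1, 1)},
    {"pfn": "b", "state": "TEMPORARY_UNAVAILABLE", "expires_at": datetime(2024, 1, 3)},
]

def expire_temporary(replicas, now):
    """Flip expired TEMPORARY_UNAVAILABLE replicas back to AVAILABLE."""
    for r in replicas:
        if r["state"] == "TEMPORARY_UNAVAILABLE" and r["expires_at"] <= now:
            r["state"] = "AVAILABLE"

expire_temporary(replicas, now)
print([r["state"] for r in replicas])  # ['AVAILABLE', 'TEMPORARY_UNAVAILABLE']
```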

What is the relationship between auditor, rucio-dumper and dark-reaper?

The dumper creates a dump of all the files in an RSE, which is passed to the auditor. The auditor checks for inconsistencies and marks data found on storage but absent from the catalog as dark data (quarantined replicas). The dark reaper then deletes this dark data, freeing up storage space and cleaning the quarantined replicas table.
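At its core, the consistency check is a set difference between what the catalog believes and what the storage dump shows. A minimal sketch, with in-memory sets standing in for the catalog and the dump:

```python
# Conceptual sketch (not Rucio's actual code): dark data is on storage but
# not in the catalog; lost data is in the catalog but not on storage.

catalog = {"f1", "f2", "f3"}       # what Rucio believes the RSE holds
storage_dump = {"f2", "f3", "f4"}  # what a dump of the storage actually shows

dark = sorted(storage_dump - catalog)  # candidates for the dark reaper
lost = sorted(catalog - storage_dump)  # candidates to declare suspicious/bad

print(dark)  # ['f4']
print(lost)  # ['f1']
```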

How is data deleted?

When replicas are healthy, the judge-cleaner sets a tombstone on replicas whose lifetime has expired. The reaper then picks up these replicas and deletes them. Sometimes replicas become unhealthy: the dumper daemon creates a dump, and the auditor checks these dumps and declares replicas as suspicious.
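The healthy-path deletion flow can be sketched as follows. This is a conceptual illustration; the replica fields are simplified stand-ins for Rucio's lock and tombstone bookkeeping.

```python
# Conceptual sketch (not Rucio's actual code) of healthy-path deletion:
# the judge-cleaner tombstones expired replicas, the reaper deletes
# tombstoned replicas that have no remaining locks.

replicas = [
    {"name": "f1", "expired": True,  "locks": 0, "tombstone": False},
    {"name": "f2", "expired": False, "locks": 1, "tombstone": False},
]

def judge_cleaner(replicas):
    """Set a tombstone on replicas whose lifetime has expired."""
    for r in replicas:
        if r["expired"]:
            r["tombstone"] = True

def reaper(replicas):
    """Delete tombstoned replicas without locks; return what remains."""
    return [r for r in replicas if not (r["tombstone"] and r["locks"] == 0)]

judge_cleaner(replicas)
remaining = reaper(replicas)
print([r["name"] for r in remaining])  # ['f2']
```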

How is a replica declared bad?

  • An operator can declare a replica as bad by issuing Rucio CLI commands:
rucio-admin replicas declare-bad [-h] --reason REASON [--inputfile [INPUTFILE]] [--allow-collection] [--lfns [LFNS]] [--scope [SCOPE]] [--rse [RSE]] [listbadfiles ...]

These bad replicas are taken by the necromancer daemon and then deleted if they cannot be recovered from other RSEs.

  • The suspicious-replica-recoverer is a daemon that analyzes different counters (transfer errors, download errors, etc.) to mark replicas as suspicious. Once a replica passes a certain threshold (configurable with the --nattempts flag), it is marked as bad and eventually consumed by the necromancer.
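The threshold logic can be sketched as below. This is a conceptual illustration only; the counter dictionary and function are simplified stand-ins for the daemon's queries over the bad-replica history.

```python
# Conceptual sketch (not Rucio's actual code): a replica whose error count
# reaches the configured threshold (cf. --nattempts) is declared bad.

NATTEMPTS = 3
suspicious_counters = {"f1": 5, "f2": 1}  # accumulated error counts per replica

def recover_suspicious(counters, threshold):
    """Return the replicas to declare bad for the necromancer."""
    return sorted(name for name, n in counters.items() if n >= threshold)

print(recover_suspicious(suspicious_counters, NATTEMPTS))  # ['f1']
```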

What is the purpose of undertaker?

A dataset is "never" deleted; however, when a dataset is known to be bad, there is no point keeping it in the catalog. The undertaker daemon takes care of removing such datasets: an operator sets an expiration date in the past on the DIDs, and the daemon deletes the dataset from the DB. If any replicas were attached, they will eventually be deleted as well.
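The core check can be sketched as follows. This is a conceptual illustration only; the DID records are simplified stand-ins for Rucio's DID table.

```python
# Conceptual sketch (not Rucio's actual code): the undertaker removes DIDs
# whose expiration date lies in the past.
from datetime import datetime

now = datetime(2024, 6, 1)
dids = [
    {"name": "bad_dataset",  "expired_at": datetime(2024, 1, 1)},
    {"name": "good_dataset", "expired_at": None},
]

def undertaker(dids, now):
    """Keep only DIDs with no expiration date or one still in the future."""
    return [d for d in dids if d["expired_at"] is None or d["expired_at"] > now]

dids = undertaker(dids, now)
print([d["name"] for d in dids])  # ['good_dataset']
```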

Daemon arguments

A full description of each daemon's arguments can be found by running rucio-{daemon} --help or in each daemon's page in the Rucio documentation. Listed below are definitions common to different daemons.

  • run-once - Run only one iteration of the daemon; the daemon runs once and exits.
  • sleep-time - How long the daemon sleeps between iterations, in seconds. Mutually exclusive with run-once.
  • threads, total-workers, threads-per-process, nprocs - [Present in threaded daemons] Run in threaded mode.
  • bulk, chunk-size, max-rows - [Present in batched daemons] Limit the number of operations a single instance of the daemon performs in one iteration.
  • dry-run - Run once, logging the daemon's operations without performing any action. Useful for verifying the settings of the instance and daemon.
  • vos - [Present in multi-VO daemons] Provide a list of VOs the daemon can interact with. Used when VOs use different settings for their daemons.
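How these arguments fit together can be illustrated with a small argparse sketch. This is a hypothetical parser, not Rucio's actual argument handling; note in particular that run-once and sleep-time are mutually exclusive.

```python
# Illustrative sketch of the common daemon arguments (a hypothetical
# parser, not Rucio's actual argument handling).
import argparse

parser = argparse.ArgumentParser(description="generic daemon")
group = parser.add_mutually_exclusive_group()
group.add_argument("--run-once", action="store_true",
                   help="run a single iteration and exit")
group.add_argument("--sleep-time", type=int, default=60,
                   help="seconds to sleep between iterations")
parser.add_argument("--threads", type=int, default=1,
                    help="threaded daemons: number of worker threads")
parser.add_argument("--chunk-size", type=int, default=100,
                    help="batched daemons: max operations per iteration")
parser.add_argument("--vos", nargs="+",
                    help="multi-VO daemons: restrict to these VOs")

args = parser.parse_args(["--run-once", "--threads", "4", "--vos", "abc", "xyz"])
print(args.run_once, args.threads, args.vos)  # True 4 ['abc', 'xyz']
```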

Batched Daemons

Some daemons can run over large backlogs depending on the traffic of an instance. To prevent the daemon from running too long, or submitting too many requests at once to an external system, a daemon's workload can be batched. Between batches of work, the daemon sleeps for sleep-time.

This setting is present in daemons that either submit requests (e.g. transfer requests) or process multiple replicas or DIDs (e.g. setting statuses, running deletion).
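The batching pattern described above can be sketched as a simple loop. This is a conceptual illustration only; a real daemon would fetch each chunk from the database and sleep for sleep-time between iterations.

```python
# Conceptual sketch (not Rucio's actual code) of a batched daemon loop:
# process the backlog in chunks of at most chunk_size per iteration.

backlog = list(range(10))
chunk_size = 4
iterations = 0

while backlog:
    chunk, backlog = backlog[:chunk_size], backlog[chunk_size:]
    # ... process the chunk (submit requests, delete replicas, ...) ...
    iterations += 1
    # in a real daemon: time.sleep(sleep_time) between iterations

print(iterations)  # 3 iterations for a backlog of 10 with chunk_size 4
```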

Threaded Daemons

When daemons are run with threaded arguments, the database query used by the daemon has a threads argument applied to the query string. This applies to Oracle, PostgreSQL, and MySQL databases.

Note: this definition does not apply to producer/consumer daemons. When producer/consumer daemons are run in threaded mode, multiple instances of producers and consumers are created. This includes the threaded conveyor daemons.
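The producer/consumer pattern mentioned in the note can be sketched as follows. This is a generic illustration of the pattern, not Rucio's implementation: several producer and consumer threads share one work queue.

```python
# Conceptual sketch (not Rucio's actual code) of the producer/consumer
# pattern: threaded mode spawns several producers and consumers that
# share a queue.
import queue
import threading

work = queue.Queue()
results = []  # list.append is thread-safe in CPython

def producer(items):
    """Put work items on the shared queue."""
    for item in items:
        work.put(item)

def consumer():
    """Drain the queue, processing each item; stop when it stays empty."""
    while True:
        try:
            item = work.get(timeout=0.1)
        except queue.Empty:
            return
        results.append(item * 2)

producers = [threading.Thread(target=producer, args=([i, i + 10],)) for i in range(2)]
consumers = [threading.Thread(target=consumer) for _ in range(2)]
for t in producers + consumers:
    t.start()
for t in producers + consumers:
    t.join()

print(sorted(results))  # [0, 2, 20, 22]
```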

Multi-VO Daemons

Daemons that have a vos option can be given separate settings per VO when running in a multi-VO instance. When the option is set, the daemon only interacts with objects that belong to the explicitly listed VOs.

By default, a daemon with multi-VO options interacts with all VOs on the instance. For example, this is the log displayed by the replica-recoverer daemon.

$ rucio-replica-recoverer --run-once
2020-07-28 15:15:14,151 5461 INFO replica_recoverer: This instance will work on VOs: def, abc, xyz, 123


$ rucio-replica-recoverer --run-once --vos abc xyz
2020-07-28 15:16:36,066 5474 INFO replica_recoverer: This instance will work on VOs: abc, xyz

Note: Multi-VO daemons can still be used in single-VO instances; in that case the vos option does not need to be set.