Overview

A deployable Samza application currently consists of JARs for Samza infrastructure code (and dependent JARs) and JARs for application-specific code (and dependent JARs). The full deployable package is determined at build time. When deploying an application, the built package of JARs is placed on the necessary node(s), including those running the job coordinator and the processing containers. This build-time packaging has benefits, as it simplifies the deployment responsibilities of Samza infrastructure – the package built by the application has everything needed to run a Samza application. Application owners (who may not be the same as the owners of the Samza infrastructure) choose the version of Samza to use and do the packaging.

One pain point in working under this model involves dependency management. Since applications do the packaging of JARs, it is up to them to do dependency conflict resolution. If application-specific code builds against a dependency of a certain version and Samza infrastructure code builds against that same dependency with a different version, then only one of those versions will actually get used at runtime. This can result in unexpected versions of libraries being used at runtime, causing issues like ClassNotFoundExceptions. There are some parts of Samza infrastructure which are relatively agnostic of application-specific code (e.g. YARN application master), but those can still be impacted by how an application does the packaging of JARs (e.g. what dependencies are included). Samza infrastructure is validated against a certain set of dependencies, but applications can still change the actual runtime dependencies that are used. These issues result in lower availability and the need to spend time on debugging. It is also up to the application to fix the packaging.

It would be helpful to be able to isolate the dependencies of the Samza infrastructure from the dependencies of the application. This SEP covers how to achieve this for the cluster-based job coordinator, which is used when running Samza jobs in resource management systems like YARN.

Terms

| Term | Description |
| --- | --- |
| cluster-based job coordinator | Process that is responsible for managing the processing containers of a Samza job (e.g. starting containers, keeping the correct number of containers running) when running Samza with a resource management system |
| YARN | A resource management system which can be used to run Samza jobs |
| application master | A cluster-based job coordinator in the context of YARN |
| application runner | Samza component which is responsible for launching an application |
| application (or application-specific) | Code and dependencies which are specific to a particular Samza application, as opposed to Samza infrastructure |
| pluggable (or plugin) class | Class which is specified by an application through configuration (e.g. system factory, grouper) |

Requirements

  • Application dependencies should not be able to impact the Samza cluster-based job coordinator
  • The solution should be reusable for the Samza logic running on processing containers

Design

New configs

| Config key | Description |
| --- | --- |
| samza.cluster.based.job.coordinator.dependency.isolation.enabled | Set to "true" to enable cluster-based job coordinator dependency isolation |

YARN-specific

These configs are for localizing the framework resources in a YARN environment. If using a different execution environment, then it will be necessary to specify localization configs specific to that environment for the framework API and framework infrastructure resources. Other environments may have different ways of specifying the resource locations.

| Config key | Description |
| --- | --- |
| yarn.resources.__samzaFrameworkApi.path | Path to the Samza framework API resource |
| yarn.resources.__samzaFrameworkApi.* | Any other YARN resource configurations for the Samza framework API resource |
| yarn.resources.__samzaFrameworkInfrastructure.path | Path to the Samza framework infrastructure resource |
| yarn.resources.__samzaFrameworkInfrastructure.* | Any other YARN resource configurations for the Samza framework infrastructure resource |

Existing JAR management

Currently, Samza infrastructure code and dependencies are included in the tarball with the Samza application. This means that conflicting dependencies between the application and Samza are resolved at build time before the tarball is created, which can cause a certain version of a dependency to be excluded. All JARs in the tarball are installed into a single directory for classpath generation and execution.

Decoupling job coordinator JARs from application JARs

In order to isolate the job coordinator JARs from the application JARs, we will use multiple classloaders, associated with different classpaths. The JARs needed by the "application" will be in a different classpath than the JARs needed for the infrastructure. The separated classpaths will allow duplicate dependencies to be used within the same JVM. The functionality we build will need to ensure that the correct dependency is used for a given class (e.g. infrastructure dependency for infrastructure class vs. application dependency for application class).

Useful details about classloaders:

  • If a class A is directly defined by a classloader CL (CL is called the "defining loader"), then classloader CL will also be called to load (i.e. call loadClass) any dependencies of class A (even when using reflection). If a classloader CL delegates to another classloader CL1 for actually defining class A, and classloader CL1 actually defines class A, then classloader CL1 will be called to load dependencies of class A.
  • If a class A is directly loaded by classloader CL and class A is also directly loaded by classloader CL1, then an instance corresponding to the first class A cannot be cast to the second class A. This means that all classes (or instances of classes) that are shared "across" classloaders must be loaded from the same classloader. An interface loaded by a common classloader is sufficient to allow sharing of a concrete object, even if the concrete class comes from a different classloader than the interface (see the sketch below).
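
To make the second point concrete, here is a minimal sketch using plain JDK classloaders; the JAR path and the com.example.Foo class are hypothetical, for illustration only.

import java.net.URL;
import java.net.URLClassLoader;

public class ClassIdentityDemo {
  public static void main(String[] args) throws Exception {
    // Hypothetical JAR containing com.example.Foo.
    URL[] jar = new URL[] { new URL("file:/tmp/foo.jar") };

    // Two classloaders with no common parent for this class (null parent = bootstrap),
    // so each one defines its own copy of com.example.Foo.
    ClassLoader cl = new URLClassLoader(jar, null);
    ClassLoader cl1 = new URLClassLoader(jar, null);

    Class<?> fooFromCl = cl.loadClass("com.example.Foo");
    Class<?> fooFromCl1 = cl1.loadClass("com.example.Foo");

    // Same fully-qualified name, but different runtime types: prints "false".
    // Casting an instance of one to the other throws ClassCastException.
    System.out.println(fooFromCl == fooFromCl1);
  }
}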

We will leverage the cytodynamics library (https://github.com/linkedin/Cytodynamics) to help manage classloaders. The cytodynamics library provides a way to annotate or whitelist certain classes as "API classes", to be loaded through a parent classloader; the rest will be loaded by a child classloader. The cytodynamics library also provides a way to load classes from the child classpath before checking the parent. These features are useful for ensuring that the correct dependency is chosen at runtime.

This design involves using three separate classloaders: API, infrastructure, and application. API is associated with the classes that might be implemented or used outside of Samza, such as SamzaApplication or SystemFactory. Infrastructure consists of the core implementation of Samza (e.g. ClusterBasedJobCoordinator) and built-in plugin implementations (e.g. KafkaSystemFactory). Application is the code provided by the application.

"API" classloader

This classloader is responsible for loading the following categories of classes:

  • Basic Samza API interfaces/classes (e.g. StreamApplication, TaskApplication), since Samza processes interact directly with those Samza API classes
  • Base classes which can be used to help build pluggable classes (e.g. BlockingEnvelopeMap, KeyValueStorageEngine), in order to isolate the base logic
  • Utility libraries provided by Samza to help build applications

The classpath that is associated with this classloader will not contain specific implementations of any API interfaces (e.g. KafkaSystemFactory). Those classes will be accessed through the "infrastructure" or "application" classloaders.

This classloader can be a URLClassLoader with the bootstrap classloader as the parent.
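
A minimal sketch of constructing such a classloader; apiJarUrls is a hypothetical array of URLs pointing at the API package JARs.

import java.net.URL;
import java.net.URLClassLoader;

class ApiClassLoaderFactory {
  static ClassLoader buildApiClassLoader(URL[] apiJarUrls) {
    // Passing null as the parent makes the bootstrap classloader the parent, so only
    // JDK classes and the API classpath JARs are visible through this classloader.
    return new URLClassLoader(apiJarUrls, null);
  }
}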

Generating the Samza API whitelist

In order to load the Samza API classes from the API classloader, we need to tell cytodynamics what those classes are. We can do this by providing a whitelist of packages/classes when building the cytodynamics classloader. All public interfaces/classes inside of samza-api should be considered an API class. One way to generate this whitelist is to use a Gradle task to find all the classes from samza-api and put that list in a file. Then, that file can be read by Samza when constructing the cytodynamics classloader. The Gradle task should also include classes from samza-kv.
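
As a rough illustration of what that Gradle task would do, the following standalone sketch lists the classes in a module JAR and writes them to a whitelist file. The file names are placeholders, and a real task would restrict the list to public interfaces/classes.

import java.io.IOException;
import java.io.PrintWriter;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class ApiWhitelistGenerator {
  public static void main(String[] args) throws IOException {
    // Placeholder file names; a real task would operate on the built module JARs.
    try (JarFile jar = new JarFile("samza-api.jar");
         PrintWriter out = new PrintWriter("samza-api-whitelist.txt")) {
      jar.stream()
          .map(JarEntry::getName)
          .filter(name -> name.endsWith(".class") && !name.contains("$"))
          // com/example/Foo.class -> com.example.Foo
          .map(name -> name.substring(0, name.length() - ".class".length()).replace('/', '.'))
          .forEach(out::println);
    }
  }
}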

Other than classes that are explicitly provided by Samza as API, there are some other classes which need to be loaded by a common classloader so that they can be shared across classloaders. For some cases, like log4j2, instead of including each specific class name, cytodynamics accepts wildcard entries for the whitelist (e.g. "org.apache.logging.log4j.*").

| Classes | Description |
| --- | --- |
| samza-api | Main API classes |
| samza-kv | Some classes from here are used by implementations of pluggable classes |
| org.apache.logging.log4j:log4j-api | See the Logging section below for more information |
| org.apache.logging.log4j:log4j-core | See the Logging section below for more information |

"Infrastructure" classloader

This classloader is responsible for loading the following categories of classes:

  • Classes which are used directly when starting up a Samza process (e.g. ClusterBasedJobCoordinator)
  • Implementations of plugin classes which are owned by the Samza team (e.g. KafkaSystemFactory)
    • These might not directly come from the Samza project. For example, a custom system implementation can be included here if it is desirable to consider it as "framework code".

This classloader will need to be able to delegate to the other classloaders in some cases.

  • The pluggable classes implement Samza API interfaces (e.g. SystemFactory), and the classes corresponding to those interfaces need to be loaded by the API classloader. Implementations of plugin interfaces can be on both the "infrastructure" and "application" classpaths, and all components need to use interfaces loaded by the same classloader (i.e. the API classloader).
  • Object deserialization (e.g. Avro) may be used within "infrastructure plugins" code, but the application must provide the classes for the concrete deserialized objects at runtime, since the application will be using those deserialized objects. For this case, the "infrastructure plugins" classloader will load the infrastructure plugins class, but it will need to delegate to the application classloader for the deserialized object classes.
    • Note that object deserialization is not used on the job coordinator, so it is less of a concern in the scope of this SEP. However, we do need to consider it for applying isolation mode to the processing containers (in a future SEP), so it will be good if the strategy used in job coordinator isolation carries over to the processing containers. 
    • For the Avro case: Since the Avro objects need to be used by the application code, then the application will need to be able to choose the version of Avro. The infrastructure code will delegate to the application classloader for the Avro classes as well, which means that the Avro version chosen by the application does need to be compatible with the Avro version used by the infrastructure.
    • This also applies to other serdes such as SerializableSerde and JsonSerdeV2.

Flow for loading a class from the infrastructure classloader:

  1. If a class is a Samza API class, then load it from the API classloader.
  2. If the class is on the infrastructure classpath, load it from the infrastructure classloader.
  3. If the class is on the application classpath, load it from the application classloader.
  4. Otherwise, a ClassNotFoundException is thrown.

This can be achieved with cytodynamics. The API classloader will be the parent of the infrastructure classloader, using a FULL isolation level and a regex specifying that all Samza API classes are preferred from the API classloader. A FULL isolation level means that a class will be loaded from the parent if the class matches the parent-preferred regex. This achieves Step 1 above. The application classloader will also be a parent of the infrastructure classloader, using a NONE isolation level. A NONE isolation level means that a class will be preferred to be loaded from the child, but the parent will be used as a fallback. This achieves Steps 2-3 above.
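
The following is a conceptual sketch of that lookup order in plain JDK terms — it is not the actual cytodynamics API. Note that a real isolating classloader must also define the non-API classes itself so that their dependencies route back through this delegation logic; cytodynamics handles that detail for us.

import java.util.function.Predicate;

public class InfrastructureDelegationSketch extends ClassLoader {
  private final ClassLoader apiClassLoader;         // parent with FULL isolation
  private final ClassLoader infrastructureLoader;   // loads the infrastructure classpath only
  private final ClassLoader applicationClassLoader; // parent with NONE isolation (fallback)
  private final Predicate<String> apiWhitelist;     // generated Samza API class whitelist

  public InfrastructureDelegationSketch(ClassLoader apiClassLoader,
      ClassLoader infrastructureLoader, ClassLoader applicationClassLoader,
      Predicate<String> apiWhitelist) {
    super(null); // no implicit parent; delegation is spelled out explicitly below
    this.apiClassLoader = apiClassLoader;
    this.infrastructureLoader = infrastructureLoader;
    this.applicationClassLoader = applicationClassLoader;
    this.apiWhitelist = apiWhitelist;
  }

  @Override
  protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
    // Step 1: Samza API classes always come from the API classloader.
    if (apiWhitelist.test(name)) {
      return apiClassLoader.loadClass(name);
    }
    try {
      // Step 2: prefer the infrastructure classpath.
      return infrastructureLoader.loadClass(name);
    } catch (ClassNotFoundException e) {
      // Steps 3-4: fall back to the application classloader; if the class is not
      // there either, this throws ClassNotFoundException.
      return applicationClassLoader.loadClass(name);
    }
  }
}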

An effect of using this ordering is that a pluggable class implemented by the application will be used when that class is not provided by the infrastructure plugins.

"Application" classloader

There are also many pluggable classes which are owned by an application owner. In the job coordinator, an example of this would be a custom SystemFactory implementation.

Similarly to the infrastructure classloader, this classloader needs to load Samza API interfaces from the API classloader.

Flow for loading a class from the application classloader:

  1. If a class is a Samza API class, then load it from the API classloader.
  2. If the class is on the application classpath, load it from the application classloader.
  3. Otherwise, a ClassNotFoundException is thrown.

This can be achieved with cytodynamics. The application classloader will be associated with the API classloader as a parent, using a FULL isolation level and a whitelist of Samza API classes. This gives us the desired loading behavior.

This structure means that if the application classloader needs a class which is an infrastructure plugin (e.g. custom system factory using KafkaSystemFactory as an "underlying system implementation"), then it will load that class from the application classpath, not the infrastructure classpath. This is reasonable, because the application is providing the implementation of the pluggable class directly, so we will just treat the infrastructure plugin class as a regular library at that point.

The classpath for this classloader will be the package of JARs built by the application.

Handling SamzaApplication.describe

The SamzaApplication.describe method needs to be able to delegate to the framework for certain concrete descriptor components (e.g. system descriptors, table functions). The framework descriptor components will be added as part of the framework API whitelist which will be checked when loading classes in the application classloader, so that the application classloader will delegate to the framework API classloader for framework descriptors. The descriptors are used to generate configs through the descriptor API classes, so concrete framework descriptors and custom descriptors will both work.

Table functions get serialized into configs by the table descriptors that they are contained in. They only need to be deserialized for processing logic, so the job coordinator does not need to deserialize them. On the processing containers, they can get deserialized using the framework infrastructure classloader, so that they can access application classes (e.g. schemas) if necessary. The infrastructure classloader will not delegate to the API classloader for the concrete descriptors.

Updated flow for loading a class from the application classloader, taking framework descriptors into account:

  1. If a class is a framework API class, load it from the framework API classloader.
  2. If a class is a framework descriptor class, load it from the framework API classloader.
  3. Load the class from the application classpath.

Classloader wiring

If the isolating classloader is used to load the "main" class, then the dependencies of that class will also be loaded through it, and Java will automatically propagate the classloader through the rest of Samza. We can modify the "main" method to use reflection to load the "main" class through the isolating classloader and then trigger the actual Samza startup.

public static void main(String[] args) throws Exception {
  // Build the classloader structure (API, infrastructure, application) described above.
  ClassLoader isolatingClassLoader = buildIsolatingClassLoader();
  // Load the "main" class through the isolating classloader so that its dependencies
  // are also loaded through that classloader.
  Class<?> isolatedClass = Class.forName(MainClass.class.getName(), true, isolatingClassLoader);
  // Invoke a static entry point on the isolated copy of the class via reflection.
  isolatedClass.getDeclaredMethod("doMain").invoke(null);
}

Pros

  • Cytodynamics provides an explicit and granular way to specify if a class should be from the parent classpath (i.e. API)
  • Classloader propagation allows the correct external dependencies to be used, even if infrastructure and the application use different versions of the same dependency
  • Do not need to modify existing Samza API classes
  • Do not need to explicitly wire classloader through Samza

Cons

  • Need to ensure proper specification of Samza API classes
    • Are there any classes that are not owned by Samza but are used as part of the Samza API? (e.g. java.lang)
  • Need to generate separate classpaths for each classloader
  • Having multiple classloaders is not obvious to developers, so certain common assumptions no longer hold (e.g. static variables are not shared across classloaders)
  • Extra dependency for Samza
    • Seems like a very lightweight dependency though

Making the necessary JARs available for running the job coordinator

Packaging the job coordinator JARs

The API and infrastructure classloaders each need a package of JARs which is isolated from the application. Those packages need to be built separately from an application. They need to include the core Samza components (e.g. samza-api, samza-core), and they can contain any pluggable components used across many applications (e.g. samza-kafka). The directory structure of the API and infrastructure packages should be the same as the structure for the application (e.g. scripts in the "bin" directory, libraries in the "lib" directory).

Packaging is left to each group of Samza jobs that uses the same set of job coordinator JARs, since different jobs may include different components. Multiple existing tools can be used to build the packages (e.g. Gradle, Maven).

An example of packaging will be included in the samza-hello-samza project.

Dependencies

API classloader dependencies

  • (required) samza:samza-api
  • (required) samza:samza-kv: includes KeyValueStorageEngine, which serves as a base for StorageEngine implementations
  • (optional; if using samza-log4j2 as infrastructure) log4j2 API/core

Infrastructure classloader dependencies

  • (required) samza:samza-core: job coordinator code, default groupers
  • (required) samza:samza-shell (launch scripts)
  • (optional; if using samza-log4j2 as infrastructure) samza:samza-log4j2
  • (optional; if using samza-kafka as infrastructure) samza:samza-kafka: Kafka checkpoint manager implementation
  • (optional; if using samza-kv-rocksdb as infrastructure) samza:samza-kv-rocksdb: RocksDB storage engine
  • (optional; if using samza-yarn as infrastructure) samza:samza-yarn: YARN resource manager factory
  • Other Samza modules or custom modules can be included in here if they want to be considered as infrastructure.

Localizing the job coordinator JARs

When making a request to YARN, clients are allowed to pass a map of resources to localize on the container. Currently, the "yarn.package.path" config is used to localize the application package, and this includes the Samza infrastructure code. Applications will need to add framework resources using "yarn.resources.*.path" configs.

  1. Continue to use "yarn.package.path" for the application package.
  2. Set "yarn.resources.__samzaFrameworkApi.path" to the path for the API package.
  3. Set "yarn.resources.__samzaFrameworkInfrastructure.path" to the path for the infrastructure package.

Samza will look in specific locations on the file system for the JARs for setting up the classpaths for the different classloaders. The framework API classpath will come from "${user.dir}/__samzaFrameworkApi", the framework infrastructure classpath will come from "${user.dir}/__samzaFrameworkInfrastructure", and the application classpath will come from "${user.dir}/__package". When using the above 3 configs, YARN will place the resources into the desired locations.
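
As an illustration, the combined configuration might contain entries like the following; the HDFS paths and package names are placeholders:

# Application package (existing behavior)
yarn.package.path=hdfs://namenode/deploy/my-app/my-app.tgz

# Framework packages for dependency isolation
yarn.resources.__samzaFrameworkApi.path=hdfs://namenode/deploy/samza-framework/framework-api.tgz
yarn.resources.__samzaFrameworkInfrastructure.path=hdfs://namenode/deploy/samza-framework/framework-infrastructure.tgz

# Enable the isolation feature
samza.cluster.based.job.coordinator.dependency.isolation.enabled=true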

In non-YARN execution environments, the "yarn" localization configurations won't apply. Other environments will have their own localization flows. If those other environments are unable to localize the resources into the desired file locations, then we can add a generic way (e.g. configuration or environment variables) to specify the file locations to get the classpath resources. The file location variables would apply to any Samza job; only the environment-specific localization flows would be different.

Generating classpaths for the JARs

Before this design, an application just had a single classpath, so we could specify that classpath through the "classpath" option for the "java" command.

This design introduces multiple classloaders, and each one has its own classpath. The cytodynamics library accepts a classpath for building a classloader. Therefore, we need a way to generate the classpath for each separate classloader and make it accessible to the Java process. We will have different directories for each category of JARs (i.e. API, infrastructure, application), so the classpath for a certain classloader can consist of a list of all JARs in the corresponding directory.

The current working directory can be obtained from System.getProperty("user.dir"), and we can find the separate JAR directories from there in code. We can also generate the classpaths in code by finding all of the JAR files in a given directory.
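
A minimal sketch of that classpath generation, assuming the localized directory layout described above (JARs under a "lib" subdirectory of each resource directory):

import java.io.File;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class ClasspathBuilder {
  // Builds the classpath for one classloader by listing all JARs in the "lib"
  // subdirectory of a localized resource directory (e.g. "__samzaFrameworkApi").
  public static URL[] jarUrls(String resourceDirName) throws MalformedURLException {
    File libDir = new File(System.getProperty("user.dir"), resourceDirName + File.separator + "lib");
    List<URL> urls = new ArrayList<>();
    File[] jars = libDir.listFiles((dir, name) -> name.endsWith(".jar"));
    if (jars != null) {
      for (File jar : jars) {
        urls.add(jar.toURI().toURL());
      }
    }
    return urls.toArray(new URL[0]);
  }

  public static void main(String[] args) throws MalformedURLException {
    URL[] apiClasspath = jarUrls("__samzaFrameworkApi");
    URL[] infrastructureClasspath = jarUrls("__samzaFrameworkInfrastructure");
    URL[] applicationClasspath = jarUrls("__package");
    System.out.println(apiClasspath.length + " API JARs, "
        + infrastructureClasspath.length + " infrastructure JARs, "
        + applicationClasspath.length + " application JARs");
  }
}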

Pros

  • Easier to localize Samza infrastructure on its own, since it is separate from applications
  • Evolves well into general split deployment, since an upgrade can be done by just localizing different Samza packages
  • Leverages existing flow for localizing JARs
  • Samza infrastructure can define the full runtime package of JARs (including dependencies) at build time

Cons

  • Need to ensure that framework packages have versions consistent with the version of Samza used within the application
  • Need to localize artifacts to multiple places
  • Not all jobs use all infrastructure plugins, so this would localize more JARs than necessary for each job

Logging

Slf4j and log4j are designed to be easy to use in the single-classloader case. For example, log4j uses static contexts to aggregate logging across multiple classes. With multiple classloaders, we have to be more careful: for example, static contexts are not shared if they are loaded by different classloaders.

  1. Set the context classloader to be the infrastructure classloader
  2. The framework packaging needs to have a certain set-up. The following steps are for supporting log4j2 as the logging implementation for slf4j. It should be possible to support other logging implementations by adding the correct dependencies and specifying the correct classes on the API whitelist.
    1. Include log4j2 dependencies in the framework API package (org.apache.logging.log4j:log4j-api, org.apache.logging.log4j:log4j-core, org.apache.logging.log4j:log4j-slf4j-impl, org.apache.logging.log4j:log4j-1.2-api).
    2. Add the classes from log4j-api and log4j-core to the API whitelist. This can be done by just adding "org.apache.logging.log4j.*" to the whitelist.
    3. Include samza-log4j2 as a dependency for the framework infrastructure package.
    4. Include log4j2 dependencies in the framework infrastructure package. These should already be included transitively through samza-log4j2.
    5. Exclude all log4j v1 dependencies from all classpaths (org.slf4j:slf4j-log4j12, log4j:log4j).
    6. (Recommended) Add a default log4j2.xml configuration file if there are cases in which the application does not provide one.
  3. When setting the system property for the log4j2 configuration file location ("log4j.configurationFile"), the application's log4j2.xml should be used if it exists; otherwise, a default log4j2.xml configuration from the framework infrastructure can be used. This can be done by passing an extra environment variable containing the "application lib directory" (which may contain the application's log4j2.xml file) to the job coordinator execution, and then reading that environment variable in the run-class.sh script when setting the log4j configuration system property (see the sketch below).
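
The same check that run-class.sh would perform, sketched in Java for clarity; the environment variable name and the default configuration location are assumptions for illustration.

import java.io.File;

public class Log4j2ConfigResolver {
  public static String resolveLog4j2ConfigPath() {
    // Hypothetical environment variable carrying the application lib directory.
    String appLibDir = System.getenv("APPLICATION_LIB_DIR");
    if (appLibDir != null) {
      File appConfig = new File(appLibDir, "log4j2.xml");
      if (appConfig.isFile()) {
        // Prefer the application's own log4j2.xml when it exists.
        return appConfig.getAbsolutePath();
      }
    }
    // Hypothetical location of the framework infrastructure's default configuration.
    return System.getProperty("user.dir")
        + "/__samzaFrameworkInfrastructure/lib/log4j2-default.xml";
  }
}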

For more context about why these changes are needed, see "Details around necessary changes for logging" in the Appendix.

Pros

  • Able to isolate log4j2 pluggable components built by Samza
  • Can override Samza infrastructure logging configuration

Cons

  • Samza ends up controlling log4j2 API version
  • No support for isolation for log4j1 pluggable components, so existing apps would need to migrate to log4j2 to get isolation

External context

The ExternalContext is currently only used on processing containers, so there should be no conflict between Samza infrastructure and application on the job coordinator. Therefore, we don't need to do anything for isolation for ExternalContext usage in the job coordinator.

Beam

No Beam-specific code runs on the application master, so we do not need to make additional changes for that part.

SQL

Currently, Samza SQL applications just consist of SQL statements (i.e. text in a file).

Samza SQL does not currently need the functionality provided by this document, since there are no application JARs. We still need to ensure that the new functionality does not break existing Samza SQL functionality. One area to watch out for: Samza SQL currently uses the SQL framework code as the main classpath, and that must continue to work.

In the future, UDFs should be able to be specified by applications. We should be able to leverage the separate classloader solution for this. Also, it is possible in the future that the job coordinator will need to run SQL-specific code. This would likely be a pluggable component, so we should be able to handle that by including it on the Samza infrastructure classpath.

Backward Compatibility

If this feature is off, this is backwards compatible, because we will use the old single-classpath model.

If this feature is on, then there is some potential runtime impact: Previously, the application packaged all Samza code and determined the dependencies, and that was what was used for the application runner, job coordinator, and processing containers. This meant that all runtime code was consistent across the Samza processes. With isolation, there may be an inconsistency between Samza and its dependencies used in the job coordinator when compared to the runner and processing containers. If there is any flow which requires the same set of dependencies to be used across all 3 pieces, then there would be a problem. An example of an issue could be if Java serialization is used to serialize a class on the application runner, and then it is deserialized on the job coordinator, where the version of the class is different than the version on the runner.

Samza serializes data into strings to pass them between processes. There are certain categories of data that are serialized into strings:

  • Plain strings (e.g. configs): Normal strings should be compatible across versions
  • JSON (e.g. checkpoints): JSON has reasonable compatibility concepts built-in, but they need to be considered when the data models are changed
  • Serializable objects (e.g. serdes in configs): Need to follow https://docs.oracle.com/javase/8/docs/platform/serialization/spec/version.html when changing Serializable objects

Within Samza, the data categories can be controlled and compatibility rules can be followed.

However, it is difficult to strictly control compatibility across versions of dependencies. It is possible that a certain dependency version serializes some data, but then a different process is unable to deserialize it, because it is using a different version of the dependency. Practically, it is not expected that this will be a common issue, since dependencies should generally be relatively consistent and it is uncommon to use third-party serialization, but it is still possible.

Once we have general split deployment, this will no longer be a problem, because the version of Samza used across all parts of the application will be consistent.

Testing

Local testing

We can use samza-hello-samza to test this locally. It has scripts to set up Zookeeper, Kafka, and YARN locally. The local YARN deployment will give the process isolation necessary to test the AM.

  1. Locally build the framework tarballs for API and infrastructure. It would be useful to put an example somewhere for how to build those tarballs.
  2. Deploy Zookeeper, Kafka, and YARN locally (https://samza.apache.org/startup/hello-samza/latest/).
  3. Fill in certain configs (see the New configs section above). These will go into the properties file passed to the run-app.sh script.
  4. Create the tarball for the application (https://samza.apache.org/startup/hello-samza/latest/). For testing local changes, remember to run the "publishToMavenLocal" command.

Changes can also be committed to samza-hello-samza to automatically execute the steps above.

Automated integration test

We could also consider writing an integration test using the integration test framework (which uses real YARN).

Full YARN cluster testing

It will also be useful to deploy some test jobs into a full YARN cluster with multiple nodes in order to verify the functionality.

Alternative solutions

Alternative solutions for SEP-24

Appendix

Details around necessary changes for logging

  1. Log4j uses the context classloader when loading most of the classes it needs to do logging. Setting the context classloader as the infrastructure classloader allows logging to be routed back to the infrastructure classloader when logging is called by any part of the Samza job (including application code).
  2. Framework packaging
    1. log4j2 dependencies need to be on the API classloader since API code does logging through slf4j, and slf4j needs a logging implementation
    2. log4j-api and log4j-core classes need to be in the API whitelist because the application may implement some pluggable components and there are a few classes that are used to implement the slf4j binding which aren't loaded by the context classloader (but they still need to be loaded by a common classloader)
      1. log4j-api is included in the API whitelist so the log4j2 concrete classes which implement log4j-api classes (e.g. LoggerContextFactory) and are loaded by the context classloader would be compatible with the application layer
      2. log4j-core is included in the API whitelist since there are some log4j2 pluggable classes which implement log4j-core interfaces (e.g. Appender) and come from the application classloader, and those need to be compatible with the infrastructure layer. Some of the pluggable classes that are included in log4j-core (e.g. RollingFileAppender) will end up getting loaded from the API classloader.
      3. We should not add the slf4j API nor any slf4j binding to the parent-preferred whitelist for the API classloader. If the application does not want to use the logging framework that is used by API/infrastructure, then that should be allowed. This does mean that the application will always need to include an slf4j binding on its classpath if it is using slf4j, even if it is the slf4j to log4j2 binding. If the slf4j to log4j2 binding is included by the application, then it will delegate to the API classloader for log4j-api classes.
    3. samza-log4j2 includes the Samza framework pluggable components (e.g. StreamAppender)
    4. log4j2 dependencies need to be on the infrastructure classloader since infrastructure code does logging through slf4j, and slf4j needs a logging implementation
      1. log4j-api and log4j-core classes will end up getting loaded from the API classloader, so it is not strictly necessary to include them here; they will be pulled in transitively anyway, and there is no need to exclude them
    5. The "log4j:log4j" (main log4j implementation) dependency conflicts with "org.apache.logging.log4j:log4j-1.2-api" (log4j1 to log4j2 bridge), and "org.slf4j:slf4j-log4j12" (log4j1 binding for slf4j) conflicts with "org.apache.logging.log4j:log4j-slf4j-impl" (log4j2 binding for slf4j).
  3. If the application provides a log4j2.xml configuration file, then we should use that. Otherwise, we can fall back to a default configuration specified by the framework.

Notes regarding logging in a multiple-classloader scenario

  • Log4j searches for a configuration file specified by the "log4j.configuration" system property (or "log4j.configurationFile" for log4j2). If that property is not specified, then log4j will try to find a log4j.xml (or log4j2.xml for log4j2) file on the classpath. Note that log4j2 will also look for a log4j2.xml if the file specified at "log4j.configurationFile" is not found. See LogManager for the log4j implementation and ConfigurationFactory for the log4j2 implementation.
    • Samza does specify the "log4j.configuration" property in run-class.sh.
    • If the "log4j.configuration" system property is an accessible file, then all classloaders will be able to load it.
    • The log4j.xml file will only be searched for through the current classloader.
  • When initializing a class that has a static slf4j Logger field, the LoggerFactory and some core log4j components/interfaces will be loaded from the "current" classloader. However, some pluggable log4j components (e.g. Appender) will be loaded by the Thread.getContextClassLoader and then passed back to the "current" classloader. If the context classloader loads core log4j components separately from the "current" classloader, then the appenders can't be shared, since the Appender interface would need to come from the same classloader.
    • A config "log4j.ignoreTCL" does exist to ignore the context classloader. Log4j will fall back to using the current classloader if the context classloader is not found or ignored (see org.apache.log4j.helpers.Loader). Samza doesn't currently set the context classloader, although it is possible that the context classloader gets set by some system using Samza.
  • We should not instantiate multiple instances of RollingFileAppender which write to the same file at the same time due to concurrency issues. Usually, this isn't something to worry about since logging is initialized statically, but when there are multiple classloaders, it is possible to instantiate multiple appenders at the same time.
    • Some log appender implementations could work concurrently. For example, StreamAppender should work as long as the system is able to handle concurrent logging events.
  • Log4j2 does some special resource loading involving looking at the parent classloader of the context classloader (see ProviderUtil), so we need to be careful if log4j-core is on both the API and infrastructure classpaths, since it might lead to using the same class from both classloaders.
    • This can lead to error logs of the form "Unrecognized format specifier" and "Unrecognized conversion specifier", since plugins get loaded from one classloader and get sent to the other.
  • If a context classloader is set, then all log4j2 plugins are loaded from that classloader. Otherwise, it will load from the "current classloader".
