logging
2025-06-09
Logger
Currently, the logger is initialized in `pipeline_step.py`:
```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s | %(filename)-20s- %(module)-20s- %(funcName)-20s- %(lineno)5d - %(name)-10s | %(levelname)8s | Processno %(process)5d - Threadno %(thread)-15d : %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
)
```
Usage is as follows: `debug`, `info`, `warning`, and `error` correspond to the different log levels. Because the threshold above is set to `INFO`, messages at the DEBUG level are not shown by default.
```python
import logging

def main():
    logging.debug("This is DEBUG message")    # suppressed at the default INFO threshold
    logging.info("This is INFO message")
    logging.warning("This is WARNING message")
    logging.error("This is ERROR message")

main()
```
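If DEBUG output is needed (e.g. while diagnosing a problem), the root logger's threshold can be lowered at runtime with `setLevel`; a minimal sketch (the format string here is shortened for brevity):

```python
import logging

# Configure a handler, then lower the root logger's threshold so that
# debug-level messages are also emitted. basicConfig(level=...) sets the
# threshold once at startup; setLevel changes it afterwards.
logging.basicConfig(format="%(levelname)s: %(message)s")
logging.getLogger().setLevel(logging.DEBUG)

logging.debug("This DEBUG message is now visible")
```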
Principles for assigning log levels:
DEBUG: output that is of little use to end users, or technical details you do not want to expose, such as:

```python
for x in ['Text', 'image', 'video']:
    module_path = "dataflow.Eval." + x
    try:
        module_lib = importlib.import_module(module_path)
        clss = getattr(module_lib, name)
        self._obj_map[name] = clss
        return clss
    except AttributeError as e:
        logging.debug(f"{str(e)}")
        continue
    except Exception as e:
        raise e
```
INFO: used to let users know the current execution status, such as:

```python
def pipeline_step(yaml_path, step_name):
    import yaml
    logger = get_logger()
    logger.info(f"Loading yaml {yaml_path} ......")
    with open(yaml_path, "r") as f:
        config = yaml.safe_load(f)
    config = merge_yaml(config)
    logger.info(f"Load yaml success, config: {config}")
    algorithm = get_operator(step_name, config)
    logger.info("Start running ...")
    algorithm.run()
```
WARNING: messages that indicate a potential problem but do not stop execution (no examples in the codebase yet).
ERROR: errors that occur during execution; used to report error messages.
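As a hypothetical sketch of how these principles combine in one function (the function name, messages, and fallback behavior below are illustrative, not taken from the DataFlow codebase):

```python
import logging

logger = logging.getLogger(__name__)

def load_config(path):
    # DEBUG: technical detail that users normally do not need to see
    logger.debug(f"Resolving config path: {path}")
    # INFO: tell the user what is currently happening
    logger.info(f"Loading config {path} ......")
    try:
        with open(path, "r") as f:
            return f.read()
    except FileNotFoundError:
        # WARNING: a potential issue; execution continues with a fallback
        logger.warning(f"Config {path} not found, falling back to empty config")
        return ""
    except OSError as e:
        # ERROR: a failure during execution; report it and re-raise
        logger.error(f"Failed to read config {path}: {e}")
        raise
```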
For logging inside operators, refer to `DataFlow/dataflow/operators/generate/Reasoning/question_generator.py`.