collective.transmute.steps#
collective.transmute.steps.basic_metadata#
Pipeline steps for basic metadata normalization in collective.transmute.
This module provides async generator functions for cleaning and setting metadata fields such as title and description. These steps are used in the transformation pipeline and are documented for Sphinx autodoc.
- async collective.transmute.steps.basic_metadata.process_title(item: PloneItem, state: PipelineState, settings: TransmuteSettings) AsyncGenerator[PloneItem | None][source]#
Ensure the title field is set for an item, using its filename or id if it's missing.
- Parameters:
item (PloneItem) -- The item to process.
state (PipelineState) -- The pipeline state object.
settings (TransmuteSettings) -- The transmute settings object.
- Yields:
PloneItem -- The updated item with a guaranteed title field.
Example
>>> async for result in process_title(item, state, settings):
...     print(result['title'])
- async collective.transmute.steps.basic_metadata.process_title_description(item: PloneItem, state: PipelineState, settings: TransmuteSettings) AsyncGenerator[PloneItem | None][source]#
Strip whitespace from the title and description fields of an item.
- Parameters:
item (PloneItem) -- The item to process.
state (PipelineState) -- The pipeline state object.
settings (TransmuteSettings) -- The transmute settings object.
- Yields:
PloneItem -- The updated item with stripped title and description.
Example
>>> async for result in process_title_description(item, state, settings):
...     print(result['title'])
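The two steps above can be sketched as plain async generators. This is an illustrative re-implementation, not the package's actual code: PloneItem is treated as a plain dict, and the fallback order (filename, then id) follows the docstring above.

```python
import asyncio
from typing import Any, AsyncGenerator

async def process_title(item: dict[str, Any]) -> AsyncGenerator[dict[str, Any], None]:
    # Fall back to the blob filename, then the short id, when title is missing.
    if not item.get("title"):
        blob = item.get("file") or {}
        item["title"] = blob.get("filename") or item.get("id", "")
    yield item

async def process_title_description(item: dict[str, Any]) -> AsyncGenerator[dict[str, Any], None]:
    # Strip surrounding whitespace from title and description, if present.
    for field in ("title", "description"):
        value = item.get(field)
        if isinstance(value, str):
            item[field] = value.strip()
    yield item

async def demo() -> dict[str, Any]:
    item = {"id": "report", "title": "", "description": "  Annual report  "}
    async for item in process_title(item):
        pass
    async for item in process_title_description(item):
        pass
    return item

result = asyncio.run(demo())
print(result["title"], "|", result["description"])  # report | Annual report
```

Real pipeline steps also receive `state` and `settings` arguments, omitted here for brevity.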
collective.transmute.steps.blobs#
Pipeline steps for handling blob fields in collective.transmute.
This module provides async generator functions for extracting and processing blob
fields (such as files and images) from items in the transformation pipeline. These
steps are used by collective.transmute.
- async collective.transmute.steps.blobs.process_blobs(item: PloneItem, state: PipelineState, settings: TransmuteSettings) AsyncGenerator[PloneItem | None][source]#
Extract and process blob fields (file, image) from an item.
- Parameters:
item (PloneItem) -- The item to process.
state (PipelineState) -- The pipeline state object.
settings (TransmuteSettings) -- The transmute settings object.
- Yields:
PloneItem -- The updated item with extracted blob files in '_blob_files_'.
Example
>>> async for result in process_blobs(item, state, settings):
...     print(result['_blob_files_'])
collective.transmute.steps.blocks#
Pipeline steps for handling Volto blocks in collective.transmute.
This module provides functions and async generator steps for processing, normalizing, and generating Volto blocks for Plone items in the transformation pipeline. These steps handle block layouts for collections, folders, and other types, and support block variation and customization.
- collective.transmute.steps.blocks._blocks_collection(item: PloneItem, blocks: list[VoltoBlock]) list[VoltoBlock][source]#
Add a listing block to a collection or topic item.
- collective.transmute.steps.blocks._blocks_folder(item: PloneItem, blocks: list[VoltoBlock]) list[VoltoBlock][source]#
Add a listing block to a folder item, using possible variations.
- collective.transmute.steps.blocks._get_default_blocks(type_info: dict, has_image: bool, has_description: bool) list[VoltoBlock][source]#
Get the default blocks for an item type, filtering by image and description presence.
- collective.transmute.steps.blocks._possible_variations() dict[str, str][source]#
Return a dictionary of possible variations for block layouts.
- async collective.transmute.steps.blocks.process_blocks(item: PloneItem, state: PipelineState, settings: TransmuteSettings) AsyncGenerator[PloneItem | None][source]#
Process and generate Volto blocks for an item, updating its block layout.
- Parameters:
item (PloneItem) -- The item to process.
state (PipelineState) -- The pipeline state object.
settings (TransmuteSettings) -- The transmute settings object.
- Yields:
PloneItem -- The updated item with Volto blocks and blocks_layout.
Example
>>> async for result in process_blocks(item, state, settings):
...     print(result['blocks'])
collective.transmute.steps.constraints#
Pipeline steps for handling constraints in collective.transmute.
This module provides async generator functions for processing and normalizing
constraints on Plone items in the transformation pipeline. These steps fix and
update exportimport constraints using portal type mappings.
- async collective.transmute.steps.constraints.process_constraints(item: PloneItem, state: PipelineState, settings: TransmuteSettings) AsyncGenerator[PloneItem | None][source]#
Fix and normalize exportimport constraints for a Plone item.
- Parameters:
item (PloneItem) -- The item to process.
state (PipelineState) -- The pipeline state object.
settings (TransmuteSettings) -- The transmute settings object.
- Yields:
PloneItem -- The updated item with normalized constraints.
Example
>>> async for result in process_constraints(item, state, settings):
...     print(result['exportimport.constrains'])
collective.transmute.steps.creators#
Pipeline steps for handling creators in collective.transmute.
This module provides async generator functions for processing and normalizing creator fields on Plone items in the transformation pipeline. These steps update and filter creators based on configuration settings.
- async collective.transmute.steps.creators.process_creators(item: PloneItem, state: PipelineState, settings: TransmuteSettings) AsyncGenerator[PloneItem | None][source]#
Process and filter the list of creators for an item.
Configuration should be added to transmute.toml, for example:

    [principals]
    default = 'Plone'
    remove = ['admin']
- Parameters:
item (PloneItem) -- The item to process.
state (PipelineState) -- The pipeline state object.
settings (TransmuteSettings) -- The transmute settings object.
- Yields:
PloneItem -- The updated item with filtered creators.
Example
>>> async for result in process_creators(item, state, settings):
...     print(result['creators'])
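A minimal sketch of the creators step, assuming the [principals] configuration shown above. The dict-based item and the fallback-to-default behavior when every creator is removed are illustrative assumptions, not the package's actual implementation.

```python
import asyncio
from typing import Any, AsyncGenerator

# Illustrative settings mirroring the [principals] table in transmute.toml.
PRINCIPALS = {"default": "Plone", "remove": ["admin"]}

async def process_creators(
    item: dict[str, Any], principals: dict = PRINCIPALS
) -> AsyncGenerator[dict[str, Any], None]:
    # Drop unwanted principals, then fall back to the default creator.
    creators = [c for c in item.get("creators", []) if c not in principals["remove"]]
    item["creators"] = creators or [principals["default"]]
    yield item

async def demo(creators: list[str]) -> dict[str, Any]:
    item = {"@id": "/doc", "creators": creators}
    async for result in process_creators(item):
        return result

print(asyncio.run(demo(["admin", "editor"]))["creators"])  # ['editor']
```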
collective.transmute.steps.data_override#
Pipeline steps for handling data overrides in collective.transmute.
This module provides async generator functions for overwriting item data fields
based on configuration settings in the transformation pipeline. These steps allow
customization of item fields using the data_override section in transmute.toml.
- async collective.transmute.steps.data_override.process_data_override(item: PloneItem, state: PipelineState, settings: TransmuteSettings) AsyncGenerator[PloneItem | None][source]#
Overwrite item data (by @id) with information from settings.
Configuration should be added to transmute.toml, for example:

    [data_override]
    "/campus/areia/noticias" = { "title" = "Notícias" }
    "/campus/areia/home" = { "exclude_from_nav" = true, "review_state" = "private" }
- Parameters:
item (PloneItem) -- The item to process.
state (PipelineState) -- The pipeline state object.
settings (TransmuteSettings) -- The transmute settings object.
- Yields:
PloneItem -- The updated item with overridden data fields.
Example
>>> async for result in process_data_override(item, state, settings):
...     print(result['title'])
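The override step reduces to a dict merge keyed by @id. This sketch mirrors the [data_override] example above; treating the override as a shallow `dict.update` is an assumption about the real implementation.

```python
import asyncio
from typing import Any, AsyncGenerator

# Illustrative overrides keyed by @id, mirroring the [data_override] table.
DATA_OVERRIDE = {
    "/campus/areia/noticias": {"title": "Notícias"},
    "/campus/areia/home": {"exclude_from_nav": True, "review_state": "private"},
}

async def process_data_override(
    item: dict[str, Any], overrides: dict = DATA_OVERRIDE
) -> AsyncGenerator[dict[str, Any], None]:
    # Merge the configured override for this item's @id, if any, into the item.
    item.update(overrides.get(item["@id"], {}))
    yield item

async def demo() -> dict[str, Any]:
    item = {"@id": "/campus/areia/noticias", "title": "noticias"}
    async for result in process_data_override(item):
        return result

print(asyncio.run(demo())["title"])  # Notícias
```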
collective.transmute.steps.dates#
Pipeline step for filtering items by date in collective.transmute.
This module provides async generator functions for filtering Plone items based on date fields in the transformation pipeline. Items older than configured dates are dropped from the pipeline.
- collective.transmute.steps.dates._date_filters_from_settings() tuple[tuple[str, str], ...][source]#
Get date filters from settings.
Example
>>> filters = _date_filters_from_settings()
>>> # Returns: (('created', '2020-01-01'), ('modified', '2019-01-01'))
- async collective.transmute.steps.dates.filter_by_date(item: PloneItem, state: PipelineState, settings: TransmuteSettings) AsyncGenerator[PloneItem | None][source]#
Pipeline step to filter items by date fields.
Drops items that have date field values older than configured thresholds. If any configured date field is older than its threshold, the item is dropped.
Configuration should be added to transmute.toml, for example:

    [steps.date_filter]
    "created" = "2000-01-01T00:00:00"

- Parameters:
item (PloneItem) -- The Plone item to process.
state (PipelineState) -- The pipeline state object.
settings (TransmuteSettings) -- The transmute settings object.
- Yields:
PloneItem | None -- The item if it passes date filters, None if dropped.
Example
>>> async for result in filter_by_date(item, state, settings):
...     if result:
...         print(f"Item {item['@id']} passed date filter")
...     else:
...         print("Item was dropped due to old date")
collective.transmute.steps.default_page#
Pipeline steps for handling default pages in collective.transmute.
This module provides async generator functions for processing and merging default page items in the transformation pipeline. These steps use metadata and settings to merge parent item data into default pages and update relations.
- async collective.transmute.steps.default_page.process_default_page(item: PloneItem, state: PipelineState, settings: TransmuteSettings) AsyncGenerator[PloneItem | None][source]#
Process and merge default page items using metadata and settings.
- Parameters:
item (PloneItem) -- The item to process.
state (PipelineState) -- The pipeline state object.
settings (TransmuteSettings) -- The transmute settings object.
- Yields:
PloneItem or None -- The updated item, merged with parent data if applicable, or None if waiting for a parent.
Example
>>> async for result in process_default_page(item, state, settings):
...     print(result)
collective.transmute.steps.ids#
Pipeline steps for handling and normalizing IDs in collective.transmute.
This module provides async generator functions and helpers for cleaning up, fixing, and transforming item IDs and paths in the transformation pipeline. These steps support export prefix removal, path cleanup, and short ID normalization.
- collective.transmute.steps.ids.fix_short_id(id_: str) str[source]#
Normalize a short ID by removing spaces and special characters.
- Parameters:
id_ (str) -- The ID string to normalize.
- Returns:
The normalized ID string.
- Return type:
str
Example
>>> fix_short_id(' my id ')
'my_id'
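A hypothetical normalizer matching the documented example. The exact separator and character set used by the real `fix_short_id` are assumptions; only the `' my id '` → `'my_id'` behavior is taken from the docstring above.

```python
import re

def fix_short_id(id_: str) -> str:
    # Trim and lowercase, then collapse runs of non-alphanumerics to "_".
    cleaned = re.sub(r"[^a-z0-9]+", "_", id_.strip().lower())
    # Drop any leading/trailing separators left over from punctuation.
    return cleaned.strip("_")

print(fix_short_id(" my id "))  # my_id
```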
- async collective.transmute.steps.ids.process_export_prefix(item: PloneItem, state: PipelineState, settings: TransmuteSettings) AsyncGenerator[PloneItem | None][source]#
Remove export prefixes from the @id field of an item.
- Parameters:
item (PloneItem) -- The item to process.
state (PipelineState) -- The pipeline state object.
settings (TransmuteSettings) -- The transmute settings object.
- Yields:
PloneItem -- The updated item with the export prefix removed from @id.
Example
>>> async for result in process_export_prefix(item, state, settings):
...     print(result['@id'])
- async collective.transmute.steps.ids.process_ids(item: PloneItem, state: PipelineState, settings: TransmuteSettings) AsyncGenerator[PloneItem | None][source]#
Normalize and clean up the @id and id fields of an item.
- Parameters:
item (PloneItem) -- The item to process.
state (PipelineState) -- The pipeline state object.
settings (TransmuteSettings) -- The transmute settings object.
- Yields:
PloneItem -- The updated item with cleaned up IDs.
Example
>>> async for result in process_ids(item, state, settings):
...     print(result['@id'], result['id'])
collective.transmute.steps.image#
Pipeline steps for handling image conversion in collective.transmute.
This module provides functions and async generator steps for converting image fields
into preview image links and managing image relations in the transformation pipeline.
These steps are used by collective.transmute for content types requiring
image conversion.
- collective.transmute.steps.image.get_conversion_types(settings: TransmuteSettings) tuple[str, ...][source]#
Get content types that require image to preview_image_link conversion.
- Parameters:
settings (TransmuteSettings) -- The transmute settings object.
- Returns:
Tuple of content type strings.
- Return type:
tuple[str, ...]
Example
>>> get_conversion_types(settings)
('News Item', 'Document')
- async collective.transmute.steps.image.process_image_to_preview_image_link(item: PloneItem, state: PipelineState, settings: TransmuteSettings) AsyncGenerator[PloneItem | None][source]#
Convert the image field to preview_image_link and manage image relations for an item.
- Parameters:
item (PloneItem) -- The item to process.
state (PipelineState) -- The pipeline state object.
settings (TransmuteSettings) -- The transmute settings object.
- Yields:
PloneItem -- The new image item (if created) and the updated original item.
Example
>>> async for res in process_image_to_preview_image_link(item, state, settings):
...     print(res)
collective.transmute.steps.paths#
Pipeline steps for handling path filtering in collective.transmute.
This module provides functions and async generator steps for filtering and validating item paths in the transformation pipeline. These steps use settings to determine which paths are allowed or dropped during processing.
- collective.transmute.steps.paths._is_valid_path(path: str, allowed: set[str], drop: set[str], dropped_by_path_prefix: dict) bool[source]#
Check if a path is allowed to be processed based on allowed and drop prefixes.
- Parameters:
path (str) -- The path to check.
allowed (set[str]) -- Set of allowed path prefixes.
drop (set[str]) -- Set of path prefixes to drop.
dropped_by_path_prefix (dict) -- Mapping used to record paths dropped by prefix.
- Returns:
True if the path is allowed, False otherwise.
- Return type:
bool
Example
>>> _is_valid_path('/foo/bar', {'/foo'}, {'/foo/bar'}, {})
False
- async collective.transmute.steps.paths.process_paths(item: PloneItem, state: PipelineState, settings: TransmuteSettings) AsyncGenerator[PloneItem | None][source]#
Filter items based on path settings, yielding only allowed items.
- Parameters:
item (PloneItem) -- The item to process.
state (PipelineState) -- The pipeline state object.
settings (TransmuteSettings) -- The transmute settings object.
- Yields:
PloneItem or None -- The item if allowed, or None if dropped.
Example
>>> async for result in process_paths(item, state, settings):
...     print(result)
collective.transmute.steps.portal_type#
Pipeline step for processing and mapping Plone item portal types.
This module provides functions to pre-process items and map their portal types
according to pipeline settings. Used in the collective.transmute pipeline.
Example
>>> async for result in process_type(item, state, settings):
... print(result)
- async collective.transmute.steps.portal_type._pre_process(item: PloneItem, settings: TransmuteSettings, state: PipelineState) AsyncGenerator[PloneItem | None][source]#
Pre-process a Plone item using a type-specific processor.
- Parameters:
item (PloneItem) -- The item to process.
settings (TransmuteSettings) -- The transmute settings object.
state (PipelineState) -- The pipeline state object.
- Yields:
PloneItem -- The processed item.
Example
>>> async for processed in _pre_process(item, settings, state):
...     print(processed)
- async collective.transmute.steps.portal_type.process_type(item: PloneItem, state: PipelineState, settings: TransmuteSettings) AsyncGenerator[PloneItem | None][source]#
Pipeline step to map and update the portal type of a Plone item.
Uses type and path mappings from settings to update the item's portal type. Yields None if the item should be dropped.
- Parameters:
item (PloneItem) -- The item to process.
state (PipelineState) -- The pipeline state object.
settings (TransmuteSettings) -- The transmute settings object.
- Yields:
PloneItem | None -- The processed item or None if dropped.
Example
>>> async for result in process_type(item, state, settings):
...     print(result)
collective.transmute.steps.portal_type.collection#
Type processor for items with the Collection portal type.
This processor is called by the portal_type pipeline step to handle
items of type Collection. It cleans up the query field and schedules
post-processing if needed.
Example
>>> async for result in processor(item, state):
... print(result)
- async collective.transmute.steps.portal_type.collection.processor(item: PloneItem, state: PipelineState) AsyncGenerator[PloneItem | None][source]#
Type processor for items with the Collection portal type.
Cleans up the 'query' field and schedules post-processing if needed.
- Parameters:
item (PloneItem) -- The Collection item to process.
state (PipelineState) -- The pipeline state object.
- Yields:
PloneItem -- The processed Collection item.
Example
>>> async for result in processor(item, state):
...     print(result)
collective.transmute.steps.portal_type.default#
Default type processor used by the portal_type pipeline step.
This processor yields the item unchanged as the default behavior.
Example
>>> async for result in processor(item, state):
... print(result)
- async collective.transmute.steps.portal_type.default.processor(item: PloneItem, state: PipelineState) AsyncGenerator[PloneItem | None][source]#
Default type processor used by the portal_type pipeline step.
- Parameters:
item (PloneItem) -- The item to process.
state (PipelineState) -- The pipeline state object.
- Yields:
PloneItem -- The unchanged item.
Example
>>> async for result in processor(item, state):
...     print(result)
collective.transmute.steps.post_querystring#
Pipeline steps for post-processing querystrings in collective.transmute.
This module provides async generator functions for updating and normalizing querystring definitions in collection-like objects and listing blocks during the transformation pipeline. These steps use state information to resolve and update querystring paths and values.
- async collective.transmute.steps.post_querystring.process_querystring(item: PloneItem, state: PipelineState, settings: TransmuteSettings) AsyncGenerator[PloneItem | None][source]#
Post-process the querystring of a collection-like object or listing block.
- Parameters:
item (PloneItem) -- The item to process.
state (PipelineState) -- The pipeline state object.
settings (TransmuteSettings) -- The transmute settings object.
- Yields:
PloneItem -- The updated item with post-processed querystring(s).
Example
>>> async for result in process_querystring(item, state, settings):
...     print(result['query'])
collective.transmute.steps.review_state#
Pipeline step for processing Plone item review states.
This module provides functions to filter items based on their workflow review
state and to rewrite workflow history as needed. Used in the collective.transmute
pipeline.
Example
>>> async for result in process_review_state(item, state, settings):
... print(result)
- collective.transmute.steps.review_state._is_valid_state(state_filter: tuple[str, ...], review_state: str) bool[source]#
Check if a review state is allowed to be processed.
- Parameters:
state_filter (tuple[str, ...]) -- Allowed review states.
review_state (str) -- The review state to check.
- Returns:
True if review_state is allowed, False otherwise.
- Return type:
bool
Example
>>> _is_valid_state(("published", "private"), "published")
True
>>> _is_valid_state(("published",), "private")
False
- async collective.transmute.steps.review_state.process_review_state(item: PloneItem, state: PipelineState, settings: TransmuteSettings) AsyncGenerator[PloneItem | None][source]#
Pipeline step to process the review state of a Plone item.
If the item's review state is not in the allowed filter, yields None. Otherwise, rewrites workflow history and yields the updated item.
- Parameters:
item (PloneItem) -- The Plone item to process.
state (PipelineState) -- The pipeline state object.
settings (TransmuteSettings) -- The transmute settings object.
- Yields:
PloneItem | None -- The processed item or None if filtered out.
Example
>>> async for result in process_review_state(item, state, settings):
...     print(result)
collective.transmute.steps.sanitize#
Pipeline step for sanitizing Plone items by dropping unwanted keys.
This module provides functions to remove specified keys from Plone items,
including block-related keys if present. Used in the collective.transmute pipeline.
Example
>>> async for result in process_cleanup(item, state, settings):
... print(result)
- collective.transmute.steps.sanitize.get_drop_keys(has_blocks: bool, settings: TransmuteSettings) set[str][source]#
Get the set of keys to drop from a Plone item during sanitization.
- Parameters:
has_blocks (bool) -- Whether the item contains blocks.
settings (TransmuteSettings) -- The transmute settings object.
- Returns:
The set of keys to drop.
- Return type:
set[str]
Example
>>> get_drop_keys(True, settings)
{'title', 'description', 'blocks'}
- async collective.transmute.steps.sanitize.process_cleanup(item: PloneItem, state: PipelineState, settings: TransmuteSettings) AsyncGenerator[PloneItem | None][source]#
Pipeline step to sanitize a Plone item by dropping unwanted keys.
Removes keys specified in settings.sanitize['drop_keys'] and, if blocks are present, also removes settings.sanitize['block_keys'].
- Parameters:
item (PloneItem) -- The Plone item to sanitize.
state (PipelineState) -- The pipeline state object.
settings (TransmuteSettings) -- The transmute settings object.
- Yields:
PloneItem -- The sanitized item.
Example
>>> async for result in process_cleanup(item, state, settings):
...     print(result)
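A minimal sketch of the sanitize step. The key sets here are illustrative stand-ins for the real sanitize settings in transmute.toml, and detecting blocks via a `"blocks"` key on the dict is an assumption.

```python
import asyncio
from typing import Any, AsyncGenerator

# Illustrative sanitize settings; the real key sets come from transmute.toml.
SANITIZE = {"drop_keys": {"_blob_files_", "parent"}, "block_keys": {"text"}}

def get_drop_keys(has_blocks: bool, sanitize: dict = SANITIZE) -> set[str]:
    # Always drop the configured keys; add block keys when blocks are present.
    keys = set(sanitize["drop_keys"])
    if has_blocks:
        keys |= sanitize["block_keys"]
    return keys

async def process_cleanup(
    item: dict[str, Any], sanitize: dict = SANITIZE
) -> AsyncGenerator[dict[str, Any], None]:
    # Remove every configured key that is present on the item.
    for key in get_drop_keys("blocks" in item, sanitize):
        item.pop(key, None)
    yield item

async def demo() -> dict[str, Any]:
    item = {"@id": "/doc", "blocks": {}, "text": "<p>old</p>", "parent": {}}
    async for result in process_cleanup(item):
        return result

print(sorted(asyncio.run(demo())))  # ['@id', 'blocks']
```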
collective.transmute.steps.uids#
Pipeline step to drop items based on their UID.
- async collective.transmute.steps.uids.drop_item_by_uid(item: PloneItem, state: PipelineState, settings: TransmuteSettings) AsyncGenerator[PloneItem | None][source]#
Drop items based on their UID.
- Parameters:
item (PloneItem) -- The item to process.
state (PipelineState) -- The pipeline state object.
settings (TransmuteSettings) -- The transmute settings object.
- Yields:
PloneItem or None -- The item if not dropped, or None if dropped.
Example
>>> async for result in drop_item_by_uid(item, state, settings):
...     print(result) if result else print("Dropped")