I am trying to use enlarge to buffer a dataset of points, but it is failing with this error:
riskscape pipeline eval pipeline_buffer_points.txt
[WARNING] The 'beta' plugin is enabled. This contains experimental features that may significantly change or be deprecated in future releases.
Problems found with 'pipeline-eval' pipeline
- Execution of your data processing pipeline failed. The reasons for this follow:
- Supplied map '{...}' is missing required keys - [INSP_DATE, FLOOD_FOUNDATION_TYPE, FLOOD_PERIMETER, FLOOD_HEIGHT_CONF, FLOOD_NOT_MEAS_RSN, FLOOD_STOREYS, FLOOD_UNIT_LEVEL, FLOOD_CLADDING_TYPE, FLOOD_COMMENT, FLOOD_FRONT_SILL_HEIGHT, FLOOD_OTHER_SILL_HEIGHT]
My pipeline is:
input(relation: 'points', name: 'exposure') as exposures_input
-> enlarge(distance: 10, mode: 'ROUND', remove-overlaps: false)
-> save(name:'buffered')
If I comment out the enlarge step then it runs fine. It also runs fine if I use a trivial dataset of points with no attributes.
Hi John,
That error message isn’t particularly helpful, is it? Could you send through the output from the following command?
riskscape pipeline eval pipeline_buffer_points.txt --print
That might give a few more clues as to what’s happening here.
In the meantime, you could also try doing the buffering long-hand, using something like this:
input(relation: 'points', name: 'exposure') as exposures_input
-> select({ merge(exposure, { buffer(exposure.the_geom, 10) as the_geom }) as exposure })
-> save(name:'buffered')
You’d need to replace the_geom with the actual name of the geometry attribute in your case.
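For example, if the geometry attribute turned out to be called geom rather than the_geom, the pipeline would become (just a sketch, so adjust the attribute name to whatever your data actually uses):
input(relation: 'points', name: 'exposure') as exposures_input
-> select({ merge(exposure, { buffer(exposure.geom, 10) as geom }) as exposure })
-> save(name:'buffered')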
Cheers,
Tim
$ riskscape pipeline eval pipeline_buffer_points.txt --print
[WARNING] The 'beta' plugin is enabled. This contains experimental features that may significantly change or be deprecated in future releases.
Step: exposures_input:[input]
Parameters:
limit : <none-given>
name : exposure
offset : <none-given>
relation : FeatureSourceRelation
value : <none-given>
Produces:
exposure =>
geom => Point[crs=EPSG:2193]
OBJECTID => Integer
EXTRACT_DATE => Date
LETTABLE_UNIT_CODE => Text
FORMATTED_ADDRESS => Text
PROPERTY_TYPE_CODE => Text
PROPERTY_TYPE_DESC => Text
PROPERTY_OWNER_CODE => Text
ASSET_TYPE_DESC => Text
PROPERTY_DESCRIPTION => Text
CURR_TLA => Text
CITY => Text
PROPERTY_LAND_CODE => Text
LAND_STATUS => Text
NUMBER_OF_BEDROOMS_CNT => Integer
BUILD_YEAR => Integer
BLDG_NUMBER_OF_STOREYS => Integer
CLADDING_TYPE => Text
LAND_AREA_SQM => Integer
PROPERTY_FLOOR_AREA_SQM => Floating
Operating_Region => Text
Parcel_ID => Integer
INSP_DATE => Date
FLOOD_FOUNDATION_TYPE => Text
FLOOD_PERIMETER => Text
FLOOD_HEIGHT_CONF => Text
FLOOD_NOT_MEAS_RSN => Text
FLOOD_STOREYS => Integer
FLOOD_UNIT_LEVEL => Text
FLOOD_CLADDING_TYPE => Text
FLOOD_COMMENT => Text
FLOOD_FRONT_SILL_HEIGHT => Floating
FLOOD_OTHER_SILL_HEIGHT => Floating
Final_valuation => Floating
Result: FeatureSourceRelation
Step: save:[save]
Inputs: [exposures_input:[input]]
Parameters:
format : <none-given>
name : buffered
Produces:
Result: SaveSinkConstructor
Okay sweet, the long-hand way is running now. Thanks.
That’s good. The problem may have been that some of those attributes are null in the input data. You could try running something like this:
riskscape pipeline eval "input('points') -> filter(is_null(INSP_DATE))"
Then check the output: if there are features in it, that means some of the attributes in the input data are null.
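If you’d like to look at the matching features themselves, you could also save the filtered output, e.g. something along these lines (a sketch reusing the same steps as above; the 'null-check' name is just an example):
riskscape pipeline eval "input('points') -> filter(is_null(INSP_DATE)) -> save(name: 'null-check')"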
If that’s the case, then you can probably fix the enlarge() problem by setting the type in a bookmark: either mark those attributes as a 'nullable' type, or just ignore the attributes if you don’t need them in the output.
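If you’d rather drop the problem attributes in the pipeline itself, an untested sketch of the 'ignore the attributes' option would be a select step that keeps only the fields you need before the enlarge, something like this (adjust the kept attributes to suit your data):
input(relation: 'points', name: 'exposure') as exposures_input
-> select({ { exposure.geom as geom, exposure.OBJECTID as OBJECTID } as exposure })
-> enlarge(distance: 10, mode: 'ROUND', remove-overlaps: false)
-> save(name:'buffered')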