Welcome to StatsWales
- Project Reports
- Sprint 35 - Jackalope
- Sprint 34-mid - Ibex
- Sprint 34 - Ibex
- Sprint 33-mid - Horse
- Sprint 33 - Horse
- Sprint 32-mid - Gecko
- Sprint 32 - Gecko
- Sprint 31 Mid - Flying Squirrel
- Sprint 31 - Flying Squirrel
- Sprint 30 Mid - Echidna
- Sprint 30 - Echidna
- Sprint 29 Mid - Donkey
- Sprint 29 - Donkey
- Sprint 28-mid - Capybara
- Sprint 28 - Capybara
- Sprint 27-mid - Bat
- Sprint 27 - Bat
- Sprint 26-mid - Aardvark
- Sprint 26 - Aardvark
- Sprint 25-mid - the Z hypothesis
- Sprint 25 - Z-test
- Sprint 24-mid - the Y intercept
- Sprint 24 - Y-intercept
- Sprint 23-mid - the letter X
- Sprint 23 - X-axis
- Sprint 22-mid Weighted Average
- Sprint 22 - Weighted Average
- Sprint 21-mid Venn diagram
- Sprint 21 - Venn diagram
- Sprint 20-mid U-Statistic
- Sprint 20 - U-statistic
- Sprint 19-mid "Total"
- Sprint 19 - Total
- Sprint 18-mid Standard Deviation
- Sprint 18 - Standard deviation
- Sprint 17-mid Regression
- Sprint 17 - Regression
- Sprint 16-mid Quartile
- Sprint 16 - Quartile
- Sprint 15-mid p-value
- Sprint 15 p-value
- Sprint 14 (mid sprint) Outlier
- Sprint 14 Outlier
- Sprint 13 (mid sprint) The Letter N
- Sprint 13 The letter n
- Sprint 12 (mid sprint) mean
- Sprint 12 Mean
- Sprint 11 (mid sprint) longitudinal study
- Sprint 11 longitudinal study
- Sprint 10 (Mid) Kruskal-Wallis test
- Sprint 10 Kruskal-Wallis test
- Sprint 9 Mid - Joint Probability Distribution
- Sprint 9 - Joint probability distribution
- Sprint 8 Mid - Intersecting Iguanas
- Sprint 8 - Intersecting Iguanas
- Sprint 7 - Helpful Histogram (mid-sprint)
- Sprint 7 - Helpful Histogram
- Sprint 6 - Giddy Gaussian (mid-sprint)
- Sprint 6 - Giddy Gaussian
- Sprint 5 - Fabulous F-Value (mid-sprint)
- Sprint 5 - Fabulous F-Value
- Sprint 4 - Educated Estimation (mid-sprint)
- Sprint 4 - Educated Estimate
- Sprint 3 - Descriptive Statistics (mid-sprint)
- Sprint 3 - Descriptive Statistics
- Sprint 2 - Curious Correlation (mid Sprint)
- Sprint 2 - Curious Correlation
- Sprint 1 - Buoyant Boolean (mid Sprint)
- Sprint 1 - Buoyant Boolean
- Sprint 0 - Arbitrary Asymptote
Project Reports
Sprint 35 - Jackalope
What we did last week
- feature: Sticky first row
- task: Run unmoderated accessibility testing with consumers
- task: Analyse round 5 of consumer user testing
- task: Load testing of dev environment - fixes and performance improvements
- task: Understand what we need to do to meet point 1 of the Welsh service standard: “Focus on the current and future wellbeing of people in Wales”
- fix: Filter fixes - JS, Non-JS and hierarchies
What we’re planning to do this week
- feature: Download in JSON format
- feature: Implement page for filtered data download
- task: Plan service handover to WG
- task: Analyse unmoderated accessibility testing with consumers results
- task: Run summative round of consumer end-to-end testing
- task: Implement WAF ahead of ITHC
- task: Plan summative round of end-to-end user testing
- task: Explore designs for showing custom data value notes in the consumer view
- task: Give devs access to pre-prod environment
- task: Create a pre-prod environment
- fix: Overlap styling issue on rows per page select box
- fix: Explore why two datasets failed to rebuild
- fix: Review logs from manual load test - identify improvements
Goals
These are the goals that we set for this sprint:
- Prepare and support for ITHC: In progress
- Address fixes in update journey: In progress
- Download metadata: In progress
- Prepare for support and service handover: In progress
Risks and Issues
Current table showing project Risks and Issues:
Show and Tell from last week
Sprint 34-mid - Ibex
What we did last week
- feature: Consumer navigation to API documentation
- feature: Mandate notes column in dataset creation and validate this
- feature: Translate to Welsh for consumer testing
- feature: Implement history tab on dataset overview (publisher)
- feature: Implement find a dataset by browsing taxonomy
- feature: Move return to tasklist link
- feature: Filters first iteration: non-JavaScript version
- feature: Hierarchies in filters
- feature: Tabs: “Data” and “About this dataset”
- feature: Filters first iteration: JavaScript version
- feature: Consumer data table (basic first version)
- feature: View API guidance
- feature: Access datasets via API
- feature: Validate the languages in lookup tables
- feature: Approve a dataset
- task: Load testing of dev environment - fixes and performance improvements
- task: Service performance measurement framework
- task: Run end-to-end UI consumer testing without sorting or filtered downloads
- task: Proxy API requests through the frontend
- task: Manual accessibility testing
- task: Prepare for beta assessment
- task: Plan unmoderated end-to-end diary study
- task: Explore introductory journey / consumer guidance to StatsWales
- task: Plan design handover
- task: Prepare discussion guide and testing materials for end-to-end UI consumer testing
- task: Hold Beta service assessment
- task: Navigate hierarchies in pivot tables
- fix: Change ‘Missing data’ for ‘x’ shorthand to ‘Not available’
What we’re planning to do this week
- task: Analyse round 5 of consumer user testing
- task: Export whole translation file for non-guidance screens
- task: Run unmoderated accessibility testing with consumers
- task: Implement Anti-Virus scanning ahead of ITHC
- task: Implement WAF ahead of ITHC
- task: Plan summative round of end-to-end user testing
- task: Explore designs for showing custom data value notes in the consumer view
- task: Give devs access to pre-prod environment
- task: Create a pre-prod environment
- fix: Two datasets failed to rebuild
- fix: Review logs from manual load test - identify improvements
Goals
These are the goals that we set for this sprint:
- Prepare the environment for the ITHC: In progress
- Sorting and Filtered downloads: In progress
- Preview release and support for comms for publishers: Done
- Attend Beta service assessment: Done
Risks and Issues
Current table showing project Risks and Issues:
This chart shows the change in total risk impact over time
Sprint 34 - Ibex
What we did last week
- feature: Write cookies statement
- task: Run end-to-end UI consumer testing without sorting or filtered downloads
- task: Plan unmoderated accessibility testing with WG staff networks
- task: Prepare Marvell environment for consumer testing
- task: Onboard KAS admin for publishers onboarding support
- task: Design how publishers view geography reference data
- task: Design exception journeys for geography and date reference data
- task: Design search for dataset (publisher) / user (admin)
- task: Get reference data
- task: Design table sort (cancelling a selected sort)
- task: Design archived dataset (consumer side)
- task: Document patterns and components
- task: [Spike] Agree strategy for log-in journey
- fix: Filter fixes to consumer view for testing 13th June
- fix: Data provider and data source do not appear on consumer view
- fix: Error when trying to approve datasets (several affected) even though preview is fine
- fix: Translation file export/import task status issues
- fix: Replace instances of javascript:history:back
- fix: Remove background styling from dataset preview and live dataset
- fix: Dataset not exporting all English dimension names
- fix: Ignoring this column in the data table led to an error
- fix: Preview does not work - columns have been ignored
- fix: Incomplete status labels on completed translation sections
- fix: Data table task shows as completed even though column types have not been assigned yet
- fix: Publisher preview does not show selected source
- fix: Uploading a dataset on underground stations: clicking links from the tasklist results in “Page Not found”
- fix: Data source added not visible in dataset preview
- fix: Measure lookup error: ‘NaN’ (not a number) message
- fix: Back history after incorrectly matched date dimension
What we’re planning to do this week
- feature: Data provider not listed
- feature: Fixes to first pass data table
- task: Run unmoderated accessibility testing with consumers
- task: Manual accessibility testing
- task: Prepare for beta assessment
- task: Implement Anti-Virus scanning ahead of ITHC
- task: Plan unmoderated end-to-end diary study
- task: Explore designs for showing custom data value notes in the consumer view
- task: Plan design handover with Andy Fox
- task: Prepare discussion guide and testing materials for end-to-end UI consumer testing
- task: Summarise outputs for the implementation of hybrid pivot tables and hierarchies
- task: Write tests for existing code to improve test coverage
- task: Agree next steps for generating pivots when datasets are ready for publishing
- task: Give devs access to pre-prod environment
- task: Create a pre-prod environment
- task: Review lighthouse report and identify further actions
- task: Run Welsh language testing with publishers
Goals
These are the goals that we set for this sprint:
- Prepare the environment for the ITHC: In progress
- Sorting and Filtered downloads: In progress
- Preview release and support for comms for publishers: In progress
- Attend Beta service assessment: In progress
Risks and Issues
Current table showing project Risks and Issues:
Show and Tell from last week
Sprint 33-mid - Horse
What we did last week
- task: Onboard publishers cohort 4
- task: Plan data table only consumer testing
- task: Design table sort (cancelling a selected sort)
- task: Document patterns and components
- task: [Spike] Agree strategy for log-in journey
- fix: JWT Cookie size limit exceeded when a member of many groups
- fix: Blank dimension column in preview of table even though all data tasks complete
- fix: Remove background styling from dataset preview and live dataset
- fix: Number ‘1’ in lookup tables/measure tables being translated in error when toggling to Welsh
What we’re planning to do this week
- feature: Change ‘Missing data’ for ‘x’ shorthand to ‘Not available’
- feature: Data provider not listed
- feature: Fixes to first pass data table
- feature: Sticky first row
- task: Manual accessibility testing
- task: Prepare for beta assessment
- task: Plan unmoderated end-to-end diary study
- task: Plan unmoderated accessibility testing with WG staff networks
- task: Explore designs for showing custom data value notes in the consumer view
- task: Prepare Marvell environment for consumer testing
- task: Onboard KAS admin for publishers onboarding support
- task: Plan approval testing task for onboarded publishers cohorts 1 and 2
- task: Prepare discussion guide and testing materials for end-to-end UI consumer testing
- task: Get reference data
- task: Write tests for existing code to improve test coverage
- task: Change builds to use WG and GitHub trigger via PAT
- task: Give devs access to pre-prod environment
- task: Create a pre-prod environment
- task: Review lighthouse report and identify further actions
- task: Run Welsh language testing with publishers
- task: Make our container images more secure
- fix: Data provider and data source do not appear on consumer view
- fix: Error when trying to approve datasets (several affected) even though preview is fine
Goals
These are the goals that we set for this sprint:
- Implement filtering and sorting for consumers: In progress
- First round of consumer testing: In progress
- Complete ITHC scoping and schedule test: In progress
- Complete on-boarding of fourth cohort of publishers: In progress
Risks and Issues
Current table showing project Risks and Issues:
Total impact charted over time
Sprint 33 - Horse
What we did last week
- task: Explore requirements for date reference data not fully known
- task: Plan data table only consumer testing
- task: Map the landscape of data services in Wales
- task: Explore how we might automate the pivoted views we have validated with consumers
- fix: Unable to change dataset before publication when update is pending approval
- fix: Dataset history: shows dataset published before being approved
- fix: Weird error message when standardised geo data doesn’t match data table codes
- fix: Cannot select monthly time period without quarterly totals in standardised dates
- fix: Fix duplicate radio value in quarter template
- fix: Approver workflow error message when responding to publishing request
- fix: Row numbers missing on preview and published datatable view
- fix: Export translations edit link for rounding goes to quality
- fix: request_data_provider_url missing in translation link on data-providers page
- fix: Using back button shows cached version of tasklist page, so it may show a completed task as incomplete
- fix: Fix table styling [GEL QA critical]
What we’re planning to do this week
- feature: Filters first iteration: non-JavaScript version
- feature: View API guidance
- feature: Access datasets via API
- task: Explore designs for showing custom data value notes in the consumer view
- task: Onboard KAS admin for publishers onboarding support
- task: Plan approval testing task for onboarded publishers cohorts 1 and 2
- task: Onboard publishers cohort 4
- task: Onboard KAS team member to SW3 support
- task: Prepare discussion guide and testing materials for end-to-end consumer testing
- task: Get reference data
- task: Write tests for existing code to improve test coverage
- task: Change builds to use WG and GitHub trigger via PAT
- task: Give devs access to prod / preprod environments
- task: Create a prod / preprod environment
- task: Review lighthouse report and identify further actions
- task: Run Welsh language testing with publishers
- task: Make our container images more secure
- fix: Preview does not work - columns have been ignored
Goals
These are the goals that we set for this sprint:
- Implement filtering and sorting for consumers: In progress
- First round of consumer testing: In progress
- Complete ITHC scoping and schedule test: In progress
- Complete on-boarding of fourth cohort of publishers: In progress
Risks and Issues
Current table showing project Risks and Issues:
Show and Tell from last week
Sprint 32-mid - Gecko
What we did last week
- feature: Flatten a Cube
- feature: Investigate how much data we actually have
- task: Start onboarded publishers cohort 3 first task
- task: Update create / update user flow map
- task: Reference data policy questions
- task: Map the landscape of data services in Wales
- task: Explore how we might automate the pivoted views we have validated with consumers
- task: Create Pivot Table queries in SQLite using a new cube
- task: Create Pivot Table queries in SQLite
- task: SPIKE: Pivot by dimension
- fix: Row numbers missing on preview and published datatable view
- fix: request_data_provider_url missing in translation link on data-providers page
- fix: An unknown error occurred, try again later with preview
- fix: Cube could not be generated in Table preview
- fix: Page not found when trying to submit for publishing
- fix: Translation tasks go incomplete, then show as complete when the CSV is exported again
- fix: Table previews don’t work on multiple datasets when only some of the data tasks are completed
- fix: Using back button shows cached version of tasklist page, so it may show a completed task as incomplete
What we’re planning to do this week
- feature: Implement find a dataset by browsing taxonomy
- feature: Filters first iteration: non-JavaScript version
- task: Onboard KAS admin for publishers onboarding support
- task: Onboard publishers cohort 4
- task: Plan data table only consumer testing
- task: Prepare discussion guide and testing materials for end-to-end consumer testing
- task: Get reference data
- task: Write tests for existing code to improve test coverage
- task: Change builds to use WG and GitHub trigger via PAT
- task: Give devs access to prod / preprod environments
- task: Create a prod / preprod environment
- task: Run Welsh language testing with publishers
- task: Make our container images more secure
- fix: Replace instances of javascript:history:back
- fix: Preview does not work - columns have been ignored
Goals
These are the goals that we set for this sprint:
- Implement filtering and sorting for consumers: In progress
- Start consumer testing: In progress
- Approve a dataset: In progress
- Get acceptance of a solution design document: In progress
- Prepare for on-boarding of fourth cohort of publishers: In progress
Risks and Issues
Current table showing project Risks and Issues:
Risk profile score
Sprint 32 - Gecko
What we did last week
- feature: On dataset overview, “Continue…” link should be first on list
- feature: Remove all categories apart from geography from reference data in SW3
- task: Iterate solution design document to include service optimisation details and latest solution
- task: Reference data policy questions
- task: Analyse feedback from first onboarding cohort
- fix: An unknown error occurred, try again later with preview
- fix: Translation file won’t upload
- fix: Page not found when clicking submit for publishing even though all tasks completed
- fix: Preview tab styling
- fix: Broken pagination styling since GEL update
- fix: Add background styling to all form pages
- fix: Back to top button styling [GEL QA critical]
What we’re planning to do this week
- feature: Move return to tasklist link
- feature: Filters first iteration: non-JavaScript version
- feature: Approve a dataset
- task: Plan data table only consumer testing
- task: Prepare discussion guide and testing materials for end-to-end consumer testing
- task: Get reference data
- task: Summarise outputs for the implementation of hybrid pivot tables and hierarchies
- task: Write tests for existing code to improve test coverage
- task: Map the landscape of data services in Wales
- task: Agree next steps for generating pivots when datasets are ready for publishing
- task: Change builds to use WG and GitHub trigger via PAT
- task: Give devs access to prod / preprod environments
- task: Create a prod / preprod environment
- task: Arrange final dataset design workshop with publishers
- task: Run Welsh language testing with publishers
- fix: Preview does not work - columns have been ignored
Goals
These are the goals that we set for this sprint:
- Implement filtering and sorting for consumers: In progress
- Start consumer testing: In progress
- Approve a dataset: In progress
- Get acceptance of a solution design document: In progress
- Prepare for on-boarding of fourth cohort of publishers: In progress
Risks and Issues
Current table showing project Risks and Issues:
Show and Tell from last week
Sprint 31 Mid - Flying Squirrel
What we did last week
- task: Review and tidy questions, assumptions and decisions log
- task: Update to-be service map
- task: Onboard publishers cohort 3
- task: Plan third cohort for publisher onboarding
- task: Prepare and submit info for GEL QA (publisher side)
- task: Analyse feedback from first onboarding cohort
- fix: errors.csv.unknown for some but not all dimension tasks
- fix: errors.csv.unknown error on dimension task
- fix: Page not found when clicking Submit for publishing even though all tasks are complete
- fix: Investigate timeouts and improve performance for the publishing journey
- fix: Errors.sources.assign_failed when matching columns in data table
- fix: Error message on data table view and dimensions when trying to upload a look up table
- fix: Cube could not be generated
- fix: Dataset won’t publish even though all create tasks completed
- fix: Dimension name AreaCode not included in the exported file
- fix: Dataset created in English not exporting all English dimension names
- fix: Data table NaN error
- fix: Related links missing translations not included in translation status logic
- fix: Stale translations exports should be marked as incomplete in the export section
What we’re planning to do this week
- feature: Filters first iteration: non-JavaScript version
- task: Iterate solution design document to include service optimisation details and latest solution
- task: Plan data table only consumer testing
- task: Prepare discussion guide and testing materials for end-to-end consumer testing
- task: Get reference data
- task: Summarise outputs for the implementation of hybrid pivot tables and hierarchies
- task: Write tests for existing code to improve test coverage
- task: Map the landscape of data services in Wales
- task: Agree next steps for generating pivots when datasets are ready for publishing
- task: Change builds to use WG and GitHub trigger via PAT
- task: Give devs access to prod / preprod environments
- task: Create a prod / preprod environment
- task: Arrange final dataset design workshop with publishers
- task: Run Welsh language testing with publishers
- fix: Fix table styling [GEL QA critical]
- fix: Fix preview tab styling
Goals
These are the goals that we set for this sprint:
- Working filtering implementation: In progress
- Onboard the 3rd Cohort of Publishers: In progress
- Prepare for consumer testing: In progress
Risks and Issues
Current table showing project Risks and Issues:
Chart showing impact score
Sprint 31 - Flying Squirrel
What we did last week
- feature: Metadata page iteration
- feature: Add standardised notes (cell, data value level)
- task: Plan third cohort for publisher onboarding
- task: Update guidance to explain that translation file is specific to version of dataset
- task: Prepare and submit info for GEL QA (publisher side)
- task: Plan second cohort for publisher onboarding
- task: Communicate to publishers the transformation of their lookup tables in preview
- task: Configure a suitable testing suite for e2e tests
- fix: Errors.sources.assign_failed when matching columns in data table
- fix: Make user email addresses case-insensitive (for sign in)
- fix: Error message on data table view and dimensions when trying to upload a look up table
- fix: Cube could not be generated [SCHS0333]
- fix: Dataset won’t publish even though all create tasks completed [Thomas Rose, educ0193]
- fix: Data table NaN error
- fix: Publishing date needs to account for timezone at time of publication
- fix: Forced to do translations in update even though metadata is unchanged
- fix: Creating a Date field and then clicking on it results in “Page not found”
What we’re planning to do this week
- feature: On dataset overview, “Continue…” link should be first on list
- feature: Map the landscape of data services in Wales
- feature: Filters first iteration: non-JavaScript version
- task: Plan data table only consumer testing
- task: Prepare discussion guide and testing materials for end-to-end consumer testing
- task: Get reference data
- task: Summarise outputs for the implementation of hybrid pivot tables and hierarchies
- task: Write tests for existing code to improve test coverage
- task: Change builds to use WG and GitHub trigger via PAT
- task: Give devs access to prod / preprod environments
- task: Create a prod / preprod environment
- task: Analyse feedback from first onboarding cohort
- task: Arrange final dataset design workshop with publishers
- task: Run Welsh language testing with publishers
- fix: Investigate timeouts and improve performance for the publishing journey
- fix: Fix back to top button styling
Goals
These are the goals that we set for this sprint:
- Working filtering implementation: In progress
- Onboard the 3rd Cohort of Publishers: In progress
- Prepare for consumer testing: In progress
Risks and Issues
Current table showing project Risks and Issues:
Show and Tell from last week
Sprint 30 Mid - Echidna
What we did last week
- feature: Add view dataset preview action on overview page for incomplete dataset
- feature: Add revision.created_by to developer view
- feature: Show overview page when clicking link to new dataset
- feature: Select group in create journey
- feature: Remove Publisher organisation and contact
- feature: Move dataset between groups
- feature: Permission denied page
- feature: Deactivate user
- feature: Change the groups and roles for a user
- feature: Implement Add user
- feature: View users in a group
- feature: Improve post-completion preview of translations
- feature: Delete a draft update to a dataset (unpublished)
- feature: Metadata: Textarea size
- feature: Change that the ‘Export text fields for translation’ status shows as ‘Available’ with a green tag
- task: Reference data policy questions
- task: Prepare bi-lingual consumer screener and email
- task: Design review changes from groups - SW-582 and 584
- fix: Error when trying to add custom lookup file
- fix: Data table summary error and dimension tasks fail
- fix: Creating a Date field and then clicking on it results in “Page not found”
- fix: Session properties are not namespaced so apply across all datasets
- fix: Noncompliant look-up table file: no error message
What we’re planning to do this week
- feature: Filters first iteration: non-JavaScript version
- feature: Approve a dataset
- task: Design publisher search for dataset / user
- task: Update guidance to explain that translation file is specific to version of dataset
- task: Plan data table only consumer testing
- task: Prepare discussion guide and testing materials for consumer testing
- task: Get reference data
- task: Prepare and submit info for GEL QA (publisher side)
- task: Summarise outputs for the implementation of hybrid pivot tables and hierarchies
- task: Write tests for existing code to improve test coverage
- task: Give devs access to prod / preprod environments
- task: Create a prod / preprod environment
- task: Analyse feedback from first onboarding cohort
- task: Configure a suitable testing suite for e2e tests
- fix: Dimension name AreaCode not included in the exported file
- fix: Dataset created in English not exporting all English dimension names
- fix: Add background styling to all form pages
- fix: Fix back to top button styling
Goals
These are the goals that we set for this sprint:
- First iteration of consumer view: In progress
- First iteration of approval journey: In progress
- Agreement on approach to pivot table and hierarchies: In progress
- Ready to screen consumers for user testing: In progress
Risks and Issues
Current table showing project Risks and Issues:
Risks and Issues Impact Chart
Sprint 30 - Echidna
What we did last week
- feature: Measure lookup preview is different from expected
- feature: Remove a published dataset
- task: Make migrations run on deploy
- task: Design table sort (cancelling a selected sort)
- task: Onboard second cohort of publishers
- task: Plan consumer recruitment for end-to-end mvp beta testing
- task: Gather user scenarios for using pivot tables and hierarchies
- task: Determine whether we need to retain existing cube ID format, and how and when we will generate cube IDs
- fix: Related links not included in translation status logic
- fix: Stale translations exports should be marked as such
- fix: Change link on translation export should go straight to change name for dimensions
- fix: errors.fact_table_validation.unknown_error
- fix: Data table validation error
- fix: Page not found on WRAx0101 dimensions
- fix: Fix StatsCymru logo and favicon
- fix: Session properties are not namespaced so apply across all datasets
What we’re planning to do this week
- feature: Remove Publisher organisation and contact
- task: Prepare bi-lingual consumer screener and email
- task: Prepare and submit info for GEL QA (publisher side)
- task: Summarise outputs for the implementation of hybrid pivot tables and hierarchies
- task: Write tests for existing code to improve test coverage
- task: Give devs access to prod / preprod environments
- task: Create a prod / preprod environment
- task: Analyse feedback from first onboarding cohort
- task: Configure a suitable testing suite for e2e tests
Goals
These are the goals that we set for this sprint:
- First iteration of consumer view: In progress
- First iteration of approval journey: In progress
- Agreement on approach to pivot table and hierarchies: In progress
- Ready to screen consumers for user testing: In progress
Risks and Issues
Current table showing project Risks and Issues:
Show and Tell from last week
Sprint 29 Mid - Donkey
What we did last week
- Related links not included in translation status logic
- Design table sort (cancelling a selected sort)
- Design review tasks for add and update users SW-585 and SW-586
- Stale translations exports should be marked as such
- Change link on translation export should go straight to change name for dimensions
- errors.csv.unknown error in lookup table
- Onboard second cohort of publishers
- Change group details
- Implement Add a group
- Analyse evidence from permissions workshops
- Measure lookup preview is different from expected
- Fix StatsCymru logo and favicon
- Explore hierarchies in SW2 datasets
- Determine whether we need to retain existing cube ID format, and how and when we will generate cube IDs
What we’re planning to do this week
- Prepare and submit info for GEL QA (publisher side)
- Summarise outputs for the implementation of hybrid pivot tables and hierarchies
- Plan consumer recruitment for end-to-end mvp beta testing
- Write tests for existing code to improve test coverage
- Gather user scenarios for using pivot tables and hierarchies
- Give devs access to prod / preprod environments
- Create a prod / preprod environment
- Permission denied page
- Analyse feedback from first onboarding cohort
- Dimension: Row sample text incorrect
- Configure a suitable testing suite for e2e tests
Goals
These are the goals that we set for this sprint:
- Address issues so that second cohort of publishers can be onboarded: In progress
- Complete minimal implementation of consumer data view: In progress
- Conclude exploration around pivot tables and hierarchies: In progress
- Roles and permissions - add and remove users from groups with roles: In progress
Risks and Issues
Current table showing project Risks and Issues:
Risk impact chart
Sprint 29 - Donkey
What we did last week
- Duplication of rows in publisher-only preview
- Plan second cohort for publisher onboarding
- Document patterns and components
- ‘errors.csv.unknown’ when matching measure lookup
- Design / prototype journey for temporarily un-publishing dataset
- Design / prototype journey for archiving dataset
- Months for date formatting errors out with Tax year
- Design / prototype approve/reject journey
- Add ‘Medr’ to the list of publisher organisations
- Page not found for all dimensions following upload of data table
- Change group details
- Implement Add a group
- Dimension look ups ‘page not found’ in update user journey
- Engage with gov.wales statistics and research team
- Highlight “Home” and “Guidance” nav items when on the respective page
- Explore Data lake APIs for sorting and filtering a Parquet file
- Add error message for users who try to reuse an exported translation file
- Comms with participants in migration workstream following ETL decision
- Partly incorrect error message - date formatting cannot be matched to a dimension
- Page not found when submitting update for publishing without filling in publishing date
- Uploading the measures displays both languages on the first pass, but subsequently only shows the current language
- Fix spacing between buttons
- [SPIKE] Test memory usage - perform load tests to understand usage
- Check the lookup table: weird symbols in Welsh description
- Translation: Export file has uploaded dimension names populated in Welsh column
- Errors that are corrected by refreshing
- Dimension containing dates: Year included in summary for quarterly data without yearly totals in it
- ‘Unknown error’ when wrong lookup for data type is uploaded
- Problems after date matching
- Problems going back in date matching - periods of time
- Implement RBAC (role-based access control)
What we’re planning to do this week
- Design archived dataset (consumer side)
- Onboard second cohort of publishers
- Plan consumer recruitment for end-to-end MVP beta testing
- Design review changes from groups - SW-582 and 584
- Gather user scenarios for using pivot tables and hierarchies
- Analyse feedback from first onboarding cohort
- Improve post-completion preview of translations
- Dimension: Row sample text incorrect
- Explore designs for viewing hierarchies
Goals
These are the goals that we set for this sprint:
- Address issues so that second cohort of publishers can be onboarded In progress
- Complete minimal implementation of consumer data view In progress
- Conclude exploration around pivot tables and hierarchies In progress
- Roles and permissions - add and remove users from groups with roles In progress
Risk and Issues
Current table showing project Risks and Issues:
Show and Tell from last week
Sprint 28 - Capybara
What we did last week
- Analyse evidence from permissions workshops
- Create journey: data table cannot be amended in preview environment
- Dataset preview link is broken
- Dimension look ups ‘page not found’ in update user journey
- Make tasklist sidebar actions a list
- Add error message for users who try to reuse an exported translation file
- Updating - “Page not found” on measures task
- Add ‘Management Information’ to the list of designations
- First pass GEL fixes (SCSS files only)
- Delete a draft dataset (unpublished)
- Dimension type: Simplifying options and tweaking dimension flow
- Remove secondary buttons
- Page not found when submitting update for publishing without filling in publishing date
- Uploading the measures displays both languages on the first pass, but subsequently only shows current language
- Fix spacing between buttons
- Support onboarding of first cohort
- Create dev tickets for managing groups and user roles
- Implement improved status tags in the update user journey
- Dimension: Name of dimension on every page in dimension flow
- Check the lookup table: weird symbols in Welsh description
- Translation: Export file has uploaded dimension names populated in Welsh column
- Errors that are corrected by refreshing
- Design first iteration Find a dataset by browsing categories (taxonomy)
- Problems going back in date matching - periods of time
- Implement RBAC (role-based access control)
What we’re planning to do this week
- Document patterns and components
- Implement Add user
- Engage with gov.wales statistics and research team
- Fix StatsCymru logo and favicon
- Create Pivot Table queries in SQLite using a new cube
- Update translations for first onboarding cohort
- SPIKE: Pivot by dimension
- Explore designs for viewing hierarchies
Goals
These are the goals that we set for this sprint:
- First iteration of consumer view In progress
- Delete dataset In progress
- Get feedback on create a dataset from first cohort In progress
- Progress exploration of hierarchies and pivot tables. In progress
Risk and Issues
Current table showing project Risks and Issues:
Total risk impact over time
Sprint 28 - Capybara
What we did last week
- Create journey: data table cannot be amended in preview environment
- Dataset preview link is broken
- Dataset preview does not work
- Dimension look ups ‘page not found’ in update user journey
- Add error message for users who try to reuse an exported translation file
- Measure link in task list doesn’t change when the measure name has been updated.
- Missing H1 on dataset overview page
- Missing H1 from dataset preview page
- First pass GEL fixes (component work)
- Updating - “Page not found” on measures task
- Provide a way for UX to run the full stack locally
- Explore hierarchies in SW2 datasets
- Delete a draft dataset (unpublished)
- Set up deployment of the SW3 app in the WG estate for onboarding publishers
- Dimension type: Simplifying options and tweaking dimension flow
- Wrong format look up table throws unhelpful error message
- Dimension name: Error states not triggering
- Run permissions and publication management table top simulations
- Download links on the consumer view of the dataset are broken
- Replacing a table with one with fewer columns
- Data values show incorrectly in preview
- Dimension: Name of dimension on every page in dimension flow
- Cube builder seems to rely on sort order of dimensions or fact table info
- Check the lookup table: weird symbols in Welsh description
- Translation: Export file has uploaded dimension names populated in Welsh column
- Dataset preview: Time period covered styling and year type not present
- Errors that are corrected by refreshing
- Problems going back in date matching - periods of time
- Implement RBAC (role-based access control)
What we’re planning to do this week
- Implement Add a group
- Fix StatsCymru logo and favicon
- Create Pivot Table queries in SQLite using a new cube
- Update translations for first onboarding cohort
- Support onboarding of first cohort
- Create dev tickets for managing groups and user roles
- ‘Unknown error’ when wrong lookup for data type is uploaded
- Explore designs for viewing hierarchies
Goals
These are the goals that we set for this sprint:
- First iteration of consumer view In progress
- Delete dataset In progress
- Get feedback on create a dataset from first cohort In progress
- Progress exploration of hierarchies and pivot tables. In progress
Risk and Issues
Current table showing project Risks and Issues:
Show and Tell from last week
Sprint 27-mid - Bat
What we did last week
- “Dimension containing dates” option is missing
- Explore hierarchies in SW2 datasets
- Integrate health probes
- Exploration into managing permissions in EntraID
- ‘An unknown error occurred, try again later’ when uploaded amended data table
- Error and weird behaviours when replacing data table
- Write responses or advise on responses to 11 Feb Cardiff event feedback
- Arrange permissions and publication management table top simulations
- Workshop / Discussion: What if measure types in the same dataset have different time periods?
- Quarter format: Changing content following research findings
- Show blank cell for Welsh if dimension name not provided (was: Incorrect pre-population in exported translation file)
- Dimensions: Update content on mismatch screens
- Design download whole dataset or manipulated dataset (consumer)
- Data table: Previously implemented summary and data table change screens no longer present
- Choosing the default of percentage for a dataset causes an error
- Clicking on “Measure or datatype” results in “Page not found”
- Time questions in a loop when completed
- Gather information about SW2 datasets from OData to inform migration approach
- Create ETL migration report
- Update a dataset - Update metadata
- Update a dataset - Update data table
- Consumer view of dataset
- Apply GEL WG override CSS file to working software
- Dimensions: Choose common reference data
What we’re planning to do this week
- Explore how we might automate the pivoted views we have validated with consumers
- Provide a way for UX to run the full stack locally
- Update translations for first onboarding cohort
- Run permissions and publication management table top simulations
- Support onboarding of first cohort
Goals
These are the goals that we set for this sprint:
- Onboard first cohort of Welsh-speaking publishers In progress
- Updated dimension flow - working software Done
- Implement health-probes for service Done
- Preview environment running on EntraID - roles and permissions Done
Risk and Issues
Current table showing project Risks and Issues:
Risk Profile
Risk profile chart
Sprint 27 - Bat
What we did last week
- Arrange session with web team
- Create stimulus for permissions table top simulations
- Manage users and groups flows for admins
- Develop feedback form that anonymous publishers with access needs can use
- Roles and permissions matrix
- Use English title in the dataset list views until the Welsh is provided
- Metadata properties that don’t need translation should show as complete for both languages
- Remove all categories apart from geography from reference data in SW3
- Memory leak - showed up in S&T
- Metadata/Translation: Related report link texts should be in translation export
- Non functional testing strategy - timeline for accessibility / Welsh / penetration testing
- Refine product roadmap and backlog
- Lookup upload xlsx (unsupported file type) error
- ‘Check the data table’ page displays note code ‘t’ as ‘true’
What we’re planning to do this week
- Dimension type: Simplifying options and tweaking dimension flow
- Arrange permissions and publication management table top simulations
- Support onboarding of first cohort
- Dimension: Name of dimension on every page in dimension flow
Goals
These are the goals that we set for this sprint:
- Onboard first two Welsh-speaking publishers In progress
- Updated dimension flow - working software In progress
- Implement health-probes for service In progress
- Preview environment running on EntraID - roles and permissions In progress
Risk and Issues
Current table showing project Risks and Issues:
Show and Tell from last week
Sprint 26-mid - Aardvark
Weekly report
Aardvark
What we did last week
- Quarter format question - Radio button for ‘-x’ not selectable
- Create SQLite Cube for testing and development
- Manage users and groups flows for admins
- Roles and permissions matrix
- Fill out Section 4. of the ETL document (Data Transformation)
- Error when a lookup CSV is not added and continue is clicked in update journey
- Data table upload only works in incognito mode
- Spaces in dimension headers in the data table
- Quarter format: Changing content following research findings
- Preview error ‘We were unable to generate a preview of the uploaded CSV. Is this a valid CSV?’
- Dimension: Date reference branch - Not selecting a radio button throughout causes issues or incorrect error messages
- Measure: Clicking continue without adding a file takes you to “An unknown error occurred”
- Data table: Uploading a new/different data table does not remove all dimension data or reset status tags
- Measure lookup - System tried to join on wrong column
- Design first iteration Find a dataset by browsing categories (taxonomy)
- Two errors when matching data types but data table and data type lookup look right
- Dimension: Summary when clicking back into a dimension after reference data added
- Dimension: Name
- Download whole dataset as human readable CSV in English or Welsh
- Feedback form for publishers [unmoderated testing]
- Exploration into programmatic access with SW data consumers
What we’re planning to do this week
- Create stimulus for permissions table top simulations
- Write responses or advise on responses to 11 Feb Cardiff event feedback
- Arrange permissions and publication management table top simulations
- [SPIKE] Test memory usage
- Implement improved status tags in the update user journey
- Metadata properties that don’t need translation should show as complete for both languages
- Show blank cell for Welsh if dimension name not provided (was: Incorrect pre-population in exported translation file)
- Metadata/Translation: Related report link texts should be in translation export
- Design download whole dataset or manipulated dataset (consumer)
- Data table: Previously implemented summary and data table change screens no longer present
- Refine product roadmap and backlog
- Update a dataset - Update metadata
These are the goals that we set for this sprint
- First pass - application of GEL to the existing publishing Beta In progress
- Review ETL plan and agree next steps In progress
- Define approach for on-boarding publishers In progress
Screen shot of risks and issues board
Chart showing change in risk profile
Show and Tell 6th March 2025
Sprint 26 - Aardvark
Aardvark
What we did last week
- Draft spec of dataset ID
- Hold dataset ID workshop
- Fill out Section 4. of the ETL document (Data Transformation)
- “Most recent update” is showing first published date and not most recent publish date
- Workshop / Discussion: What if measure types in the same dataset have different time periods?
- Guidance: Styling not pulling in govuk classes
- Permissions design / prototype - DUPLICATE
- Translate current working software into Welsh (end of Jan)
- Set up dimensions containing time: Sample of rows is wrong
What we’re planning to do this week
- Arrange permissions and publication management table top simulations
- Roles and permissions matrix
- Metadata properties that don’t need translation should show as complete for both languages
- Metadata/Translation: Related report link texts should be in translation export
- Refine product roadmap and backlog
- Dimension: Name
- Update a dataset - Update metadata
Goals
These are the goals that we set for this sprint:
- Arrange roles and permissions testing sessions with publishers In progress
- First pass - application of GEL to the existing publishing Beta In progress
- Review ETL plan and agree next steps In progress
- Define approach for on-boarding publishers In progress
Risk and Issues
Current table showing project Risks and Issues
Show and Tell from last week
Sprint 25-mid - the Z hypothesis
Weekly report
Z hypothesis
What we did last week
- Draft spec of dataset ID
- Hold dataset ID workshop
- Workshop / Discussion: What if measure types in the same dataset have different time periods?
- Performance improvements
- Translation import validation needs to ignore any extra rows in the CSV
- Remove redundant env vars in pipelines
- Dataset overview page status tags
- Hold SW3 private beta event in Cardiff and remotely
- Approval process mapping
- Temporary login workaround
- Create facilitation guide for the event in Cardiff
- Permissions reqs / mapping
- Set up dimensions containing time: Sample of rows is wrong
- Returning to column assignment from the tasklist duplicates dimensions
- Feedback form for publishers
- Stand up the service in WG Azure
What we’re planning to do this week
- Flatten a Cube
- “Most recent update” is showing first published date and not most recent publish date
- Refine product roadmap and backlog
- Dimension: Name
- Create ETL migration plan document - draft for 26/02/2025 (ideally much sooner)
- Roles and permissions matrix
- Prototype permissions screens
These are the goals that we set for this sprint
- Hold successful in-person and on-line event in Cardiff Done
- Bug fixes and performance improvements on working software In progress
- Define priorities for public Beta Done
- Complete testing Done
- Define next steps for data consumer views In progress
- Domain for Beta Done
Screen shot of risks and issues board
Chart showing change in risk profile
Sprint 25 - Z-test
Z-test
What we did last week
- Performance improvements
- Translation import validation needs to ignore any extra rows in the CSV
- Hold SW3 private beta event in Cardiff and remotely
- Create facilitation guide for the event in Cardiff
- Broken test dataset
- Dataset preview: Rounding applied not appearing
- Complete consumer view testing sessions
- Preview pagination is broken
- Implement ‘Guidance’ link in the header
- Preview a dataset [MVP]
- Dimension: Implement time source
- Feedback form for publishers
- Dimension: Dates reference data branch
- Stand up the service in WG Azure
- Data domain model
- Dimensions: Upload a lookup table
- Dimension: Measures
What we’re planning to do this week
- Approval process mapping
- Permissions reqs / mapping
- Refine product roadmap and backlog
- Dimension: Row sample text incorrect
- Set up dimensions containing time: Sample of rows is wrong
- Dimension: Name
- Stand up the service in WG Azure
Goals
These are the goals that we set for this sprint:
- Hold successful in-person and on-line event in Cardiff Done
- Bug fixes and performance improvements on working software In progress
- Define priorities for public Beta In progress
- Complete testing In progress
- Define next steps for data consumer views In progress
- Domain for Beta In progress
Risk and Issues
Current table showing project Risks and Issues
Show and Tell from last week
Sprint 24 - Y-intercept
Y-intercept
What we did last week
- Run first set of round 3 consumer testing sessions
- Schedule sessions with consumers to test viewing a dataset
- Complete testing of create journey with early testers
- Publisher homepage / list of datasets
- Request support (via email)
- Provide support (via email) & set up Jira support board
- Capture audit of actions in the publishing system (MVP)
- Explore access permissions requirements
- Decide approach for 3 metadata fields currently not in design
- Content and error message updates to create journey screens
- Browse basic list of datasets - consumer view
- Publishing: Translation export and import
What we’re planning to do this week
- Organise testing of update a dataset
- Complete consumer view testing sessions
- Refine product roadmap and backlog
- Plan StatsWales away day in Cardiff (Marvell & WG teams)
- Change dataset after it has been set for publication
- Handover from Register Dynamics
- Dimension: Name
- Update a dataset - Update data table
- Stand up the service in WG Azure
Goals
These are the goals that we set for this sprint:
- Update journey - working software In progress
- Complete consumer view - working software In progress
- Complete testing with consumers In progress
Risk and Issues
Current table showing project Risks and Issues
Show and Tell from last week
Screenshots from consumer view design concepts
Summary
View by dimension
Filtered view
Sprint 23 - the letter X
Weekly report
The letter X
What we did last week
- Prepare discussion guide for consumer testing
- Dataset removed from list of datasets and no longer available during creation
- Iterate error messages - create user journey
- Prototype WIMD dataset for consumer testing
- Complete testing of create journey with early testers
- Content and error message updates to create journey screens
What we’re planning to do this week
- Refine product roadmap and backlog
- Design first iteration Find a dataset by browsing categories (taxonomy)
- Schedule sessions with consumers to test viewing a dataset
- Prepare guide for unmoderated testing with publishers
- Complete testing of create journey with early testers
- Plan StatsWales away day in Cardiff (Marvell & WG teams)
- Handover from Register Dynamics
- Dimension: Name
- Capture audit of actions in the publishing system (MVP)
- Stand up the service in WG Azure
These are the goals that we set for this sprint
- Do “Low hanging fruit” on the create journey. In progress
- Implement update user journey In progress
- Migrate to WG estate In progress
- First consumer user testing session In progress
Screen shot of risks and issues board
Chart showing change in risk profile
Sprint 24-mid - the Y intercept
Weekly report
The Y intercept
What we did last week
- Create facilitation guide for the event in Cardiff
- Permissions reqs / mapping
- Refine product roadmap and backlog
- Create journey complains at what appears to be a valid date
- Update a dataset - Update data table
- Stand up the service in WG Azure
What we’re planning to do this week
- Move guidance route outside of auth
- The org / team assignment is not remembering what you selected
- Publisher dataset list is showing scheduled dataset as “Live” status
- Dataset preview: Related reports not pulling through
- Dataset preview: Data providers and sources not showing correctly
- Dataset preview: Next update expected (provisional) not showing when dataset won’t be updated
- Organise testing of update a dataset
- Extract ZIP and CSV Data from Backoffice BACAC File
- Metadata: Cancel button missing from some screens
- Prepare guide for unmoderated testing with publishers
- Plan StatsWales away day in Cardiff (Marvell & WG teams)
- Change dataset after it has been set for publication
- Submit dataset for publication
- Handover from Register Dynamics
- User authentication journey from EntraID to SW3 - scope for MVP
- Dimension: Load reference data
These are the goals that we set for this sprint
- Update journey - working software Done
- Complete consumer view - working software Done
- Complete testing with consumers - testing Done
Screen shot of risks and issues board
Chart showing change in risk profile
Sprint 23 - X-axis
X-axis
What we did last week
- Prototype WIMD dataset for consumer testing
- Deploy front-end and back-end applications into automated infrastructure
- Provide CSV data for time dimension testing
- Prototype HLTH dataset for consumer testing
- Review & iterate user needs board
- Publisher research - dataset design and consumer experience workshop 1
- Onboard MVP collaborators
- Prepare to test the next iteration of working software (ref data and metadata) with SME publisher
What we’re planning to do this week
- Refine product roadmap and backlog
- Prepare discussion guide for consumer testing
- Schedule sessions with consumers to test viewing a dataset
- Prepare guide for unmoderated testing with publishers
- Complete testing of create journey with early testers
- Prepare discussion guide for testing with consumers
- Schedule sessions with consumers
- Plan StatsWales away day in Cardiff (Marvell & WG teams)
- Handover from Register Dynamics
- Dimension: Name
- User authentication journey from EntraID to SW3 - scope for MVP
- Stand up the service in WG Azure
Goals
These are the goals that we set for this sprint:
- Do “Low hanging fruit” on the create journey. In progress
- Implement update user journey In progress
- Migrate to WG estate In progress
- First consumer user testing session In progress
Risk and Issues
Current table showing project Risks and Issues
Show and Tell from last week
Screenshots from working software
Sprint 22-mid Weighted Average
Weekly report
Weighted Average
What we did last week
- Explore the design of views with larger / more complex datasets
- View guidance (design)
- Publishing: When should this dataset be published?
- GEL fixes from review Oct 2024
- Review & iterate user needs board
- Publisher research - dataset design and consumer experience workshop 1
- Onboard MVP collaborators
What we’re planning to do this week
- Prototype HLTH and WIMD datasets for consumer testing
- Handover from Register Dynamics
- Dimension: Name
- Configure a suitable testing suite for e2e tests
- Prepare to test the next iteration of working software (ref data and metadata) with SME publisher
- Deploy front-end and back-end applications into automated infrastructure
- Provide CSV data for time dimension testing
- Content and error message updates to create journey screens
- Dimensions: Choose common reference data
- Stand up the service in WG Azure
These are the goals that we set for this sprint
- End to end automated testing In progress
- Deploy to WG infrastructure In progress
- Test the create journey In progress
- Prepare for next round of consumer testing on views In progress
- Feasibility analysis - OData to structural model In progress
Screen shot of risks and issues board
Chart showing change in risk profile
Link to show and tell 09/01/2025
Sprint 22 - Weighted Average
Weekly report
Weighted Average
What we did last week
- Iterate beta product roadmap beyond MVP build
- Putting a large amount of text in the title of the dataset results in unhelpful message
- Metadata: Add a data source from the selected data provider lacks validation
- Metadata: Publisher organisation and contact
- Metadata: Topics
- Suggested changes from metadata design review
- Provide a way for the user to log out from the frontend
- [SPIKE] Log in journey for private Beta
- Metadata: Sources added
- Metadata: Add a data source from the selected data provider
- Metadata: Add a data provider
What we’re planning to do this week
- Explore the design of views with larger / more complex datasets
- Onboard MVP collaborators
- Handover from Register Dynamics
- Prepare to test the next iteration of working software (ref data and metadata) with SME publisher
- Deploy front-end and back-end applications into automated infrastructure
- Provide CSV data for time dimension testing
- Publishing: Translation export and import
- Dimensions: Choose common reference data
- Stand up the service in WG Azure
- Dimensions: Upload a lookup table
- Dimension: Measures
Goals
These are the goals that we set for this sprint:
- End to end automated testing In progress
- Deploy to WG infrastructure In progress
- Get the create journey ready for testing In progress
- Prepare for next round of consumer testing on views In progress
- Feasibility analysis - OData to structural model In progress
Risk and Issues
Current table showing project Risks and Issues
Sprint 21-mid Venn diagram
Weekly report
Venn diagram
What we did last week
- Iterate beta product roadmap beyond MVP build
- Metadata: Add a data source from the selected data provider lacks validation
- Metadata: Publisher organisation and contact
- Metadata: Topics
- Suggested changes from metadata design review
- Time format - understand our approach [Decision]
- Provide a way for the user to log out from the frontend
- Metadata: Sources added
- Metadata: Add a data source from the selected data provider
- Metadata: Add a data provider
What we’re planning to do this week
- Explore the design of views with larger / more complex datasets
- Onboard MVP collaborators
- Handover from Register Dynamics
- Prepare to test the next iteration of working software (ref data and metadata) with SME publisher
- Hold workshop to plan data migration exercise
- Deploy front-end and back-end applications into automated infrastructure
- Dimension: Implement time source
- Dimension: Dates reference data branch
- Publishing: Translation export and import
- Dimensions: Choose common reference data
- Stand up the service in WG Azure
These are the goals that we set for this sprint
- Understand the scope for remaining items on the roadmap for MVP Done
- Identify and document a steel thread for create journey working software Done
- End to end test environment In progress
- Deploy to WG infrastructure In progress
Screen shot of risks and issues board
Chart showing change in risk profile
Screenshots from working software
Sprint 21 - Venn diagram
Weekly report
Venn Diagram
What we did last week
- [Bug] Trying to create a dataset with a certain CSV causes an unspecific error message
- Iterate consumer dataset view prototype for GSS event
- Understand the scope for remaining items on the roadmap for MVP (break down into further tickets as necessary)
- Prep for GSS dissemination committee
- Hold internal data measures workshop
- Organise follow up research sessions with publishers from taxonomy research
- Identify more users with access needs
- Explore access permissions requirements
- Redis is open to the entire Internet
- Approach for mapping legacy footnotes in SW3
What we’re planning to do this week
- Explore the design of views with larger / more complex datasets
- Onboard MVP collaborators
- Send invites to taxonomy follow up research with publishers
- Iterate beta product roadmap beyond MVP build
- Configure a suitable testing suite for e2e tests
- Time format - understand our approach [Decision]
- Deploy front-end and back-end applications into automated infrastructure
- Time source
- Content and error message updates to initial create journey screens
- Dimension: Dates reference data branch
- Data architecture internals - online cube model
- Publishing: When should this dataset be published?
- Dimensions: Choose common reference data
- Stand up the service in WG Azure
Goals
These are the goals that we set for this sprint:
- End to end automated testing In progress
- Deploy to WG infrastructure In progress
- Test create journey with Nicole In progress
- Explore data views with larger datasets - predefined views (Venn Diagram) In progress
Risk and Issues
Current table showing project Risks and Issues
Video of the most recent show and tell
Screenshots from candidate design prototypes
Screenshots from working software
Sprint 20-mid U-Statistic
Weekly report
U-Statistic
What we did last week
- Iterate consumer dataset view prototype for GSS event
- Design and agree administration of reference data for MVP and beyond
What we’re planning to do this week
- Metadata: Add a data source from the selected data provider lacks validation
- Configure a suitable testing suite for e2e tests
- Organise follow up research sessions with publishers from taxonomy research
- Identify more users with access needs
- Deploy front-end and back-end applications into automated infrastructure
- Explore access permissions requirements
- Content and error message updates to initial create journey screens
- Dimension: Dates reference data branch
- Data architecture internals - online cube model
- Dimensions: Choose common reference data
- Stand up the service in WG Azure
These are the goals that we set for this sprint
- Understand the scope for remaining items on the roadmap for MVP. In progress
- Identify and document a steel thread for create journey working software In progress
- End to end test environment In progress
- Deploy to WG infrastructure In progress
Screen shot of risks and issues board
Chart showing change in risk profile
Sprint 20 - U-statistic
Weekly report
U-Statistic
What we did last week
- Summarise findings and next steps from taxonomy research study
- Prepare Data Model Solution Design deliverable for submission to the data working group
- Prep for project board 11:30 Monday 11th November
- Create the SW3 map to demonstrate outcomes - first draft
- Ability to edit the data table section after it has been completed
- Links to data table task duplicated
- Create a dataset: Error message
- Metadata: How is this dataset designated?
- Metadata: Will this dataset be regularly updated?
- Metadata: Report links added
- Metadata: Add a link to a report
- Metadata: What is the statistical quality of this dataset?
- Metadata: How was the data collected or calculated?
- Metadata: What is the summary of the dataset?
- Metadata: What is the title of the dataset?
What we’re planning to do this week
- Organise follow up research sessions with publishers from taxonomy research
- Identify more users with access needs
- Deploy front-end and back-end applications into automated infrastructure
- Explore access permissions requirements
- Dimension: Dates reference data branch
- Data architecture internals - online cube model
- Dimensions: Choose common reference data
- Stand up the service in WG Azure
Goals
These are the goals that we set for this sprint:
- Understand the scope for remaining items on the roadmap for MVP. In progress
- Identify and document a steel thread for create journey working software In progress
- End to end test environment In progress
- Deploy to WG infrastructure. (U-statistic) In progress
Risk and Issues
Current table showing project Risks and Issues
Video of the most recent show and tell
Current version of the service transformation map
Sprint 19-mid "Total"
Weekly report
“Total”
What we did last week
- Gather stakeholder feedback and iterate the data model solution document
- Iterate consumer view stimulus for third sector research
- Support for decision around ISO format for time
- Prepare Data Model Solution Design deliverable for submission to the data working group
- Plan engagement with the Third Sector User Panel on 7 Nov
- Create the SW3 map to demonstrate outcomes - first draft
- Missing error message “failed_to_get_preview”
- Explore new OData service
What we’re planning to do this week
- Design and agree administration of reference data for MVP+
- Summarise findings and next steps from taxonomy research study
- Identify more users with access needs
- Exploration of totals and averages
- Deploy front-end and back-end applications into automated infrastructure
- Explore access permissions requirements
- Data architecture internals - online cube model
- Dimensions: Choose common reference data
- Metadata: Sources added
- Metadata: Add a data source from the selected data provider
- Metadata: Add a data provider
- Stand up the service in WG Azure
These are the goals that we set for this sprint:
- Agree data document for project board Done
- Do follow-up research with consumers at third-sector event. Done
- Metadata in working software (topics, stretch goal) In progress
Screen shot of risks and issues board
Chart showing change in risk profile
Transformation map showing changes that will be made to the service in Stats Wales 3
The map shows the holistic end-to-end transformation of StatsWales. It highlights pain points evidenced in the current service and the proposed outcomes of processes that will be removed, added or changed.
Sprint 19 - Total
Weekly report
Total
What we did last week
- Test the “find and view” designs with consumers
- Test matching to reference data using SW2 datasets
- Add placeholder text for missing column headers
- Start creating a roadmap for publisher adoption of SW3
- Start to understand cube migration and publisher onboarding
What we’re planning to do this week
- Iterate consumer view stimulus for third sector research
- Summarise findings and next steps from taxonomy research study
- Prepare Data Model Solution Design deliverable for submission to the project board
- Identify more users with access needs
- Exploration of totals and averages
- Plan engagement with the Third Sector User Panel on 7 Nov
- Deploy front-end and back-end applications into automated infrastructure
- Explore access permissions requirements
- Data architecture internals - online cube model
- Dimensions: Choose common reference data
- Stand up the service in WG Azure
Goals
These are the goals that we set for this sprint:
- Agree data document for project board In progress
- Do follow-up research with consumers at third-sector event In progress
Screen shot of risks and issues board
Screenshots of Prototype Demonstrated at the Show and Tell
These are screenshots of working software from the publisher create journey. They are likely to change as part of the incremental development process.
Sprint 18-mid Standard Deviation
Weekly report
Standard Deviation
What we did last week
- Test the find and view designs with consumers
- Create stimulus/prototype for consumer testing - view data
- Test the proposed taxonomy with consumers
What we’re planning to do this week
- Identify more users with access needs
- Exploration of totals and averages
- Test matching to reference data using SW2 datasets
- Deploy front-end and back-end applications into automated infrastructure
- Create the SW3 service map - first draft
- Gather information around Roles Based Access Control
- Ability to edit the data table section after it has been completed
- Start creating a roadmap for publisher adoption of SW3
- Start to understand cube migration
- Dimension: Dates reference data branch
- Data architecture internals - online cube model
- Dimensions: Choose common reference data
- Stand up the service in WG Azure
These are the goals that we set for this sprint:
- Ready to test time series dimension with SME In progress
- Conduct research sessions with consumers to test v1 of find and view Done
- Progress our understanding of the cube model In progress
Screen shot of risks and issues board
Sprint 18 - Standard deviation
Weekly report
Standard Deviation
What we did last week
- Summarise findings from testing the ‘update a dataset’ prototype
- Explore how much reference data changes and how much measurement codes change
- Force the user to re-auth if the backend returns 401
- Translate any new content added in Sprint 16
- Define SW3 OKRs and metrics - first iteration
- Test the proposed taxonomy with consumers
- [BUG] Welsh Error code is in English
- [BUG] Menu links are generated incorrectly and URL routing matches too broadly.
- Plan research with consumers
What we’re planning to do this week
- Exploration of totals and averages
- Test matching to reference data using SW2 datasets
- Create stimulus/prototype for consumer testing - view data
- Deploy front-end and back-end applications into automated infrastructure
- Create the SW3 service map - first draft
- Gather information around Roles Based Access Control
- Ability to edit the data table section after it has been completed
- Start creating a roadmap for publisher adoption of SW3
- Start to understand cube migration
- Dimension: Dates reference data branch
- Data architecture internals - online cube model
- Dimensions: Choose common reference data
- Stand up the service in WG Azure
Goals
These are the goals that we set for this sprint:
Ready to test time series dimension with subject matter expert In progress
Conduct research sessions with consumers to test v1 of find and view In progress
Progress our understanding of the cube model In progress
Screen shot of risks and issues board
Sprint 17-mid Regression
Weekly report
Regression
What we did last week
- Flag failing builds - roll back when build fails
- Explore how much reference data changes and how much measurement codes change
- Force the user to re-auth if the backend returns 401
- Stand up all bits of infrastructure from scratch in Terraform for Marvell
- Design proposal - data view
- Remove lang from URL path in the backend and use header
- [BUG] Site goes back to English when homepage is selected.
- [BUG] Website goes back to English when Welsh has been selected.
- [BUG] Language button doesn’t work and then inverts itself.
What we’re planning to do this week
- Create stimulus/prototype for consumer testing - view data
- Book sessions with data consumers
- Gather information around Roles Based Access Control
- Ability to edit the data table section after it has been completed
- Start creating a roadmap for publisher adoption of SW3
- Translate any new content added in Sprint 16
- Define SW3 OKRs and metrics
- Dimension: Dates reference data branch
- Test the proposed taxonomy with consumers
- Data architecture internals - online cube model
- Plan research with consumers
- Dimensions: Choose common reference data
- Stand up the service in WG Azure
These are the goals that we set for this sprint:
Language switching working as expected (development) In progress
Front end for time reference data (development) In progress
First design for consumer-side table view (design) In progress
Screen shot of risks and issues board
Risk impact score chart
Sprint 17 - Regression
Weekly report
Regression
What we did last week
- Prototype iterations to update data table journey
- Internal show and tell - demo to the team
- Extract data for testing updates
- Get reference data into postgres database
- Test the update journey designs
- MVP consumer site prototype
- [BUG] Login doesn’t protect anything
- [BUG] List data files shows blank screen or error.
- Start interaction with data preparation team - First artefact
- Identify candidate area that could be prototyped and conducted in Welsh.
What we’re planning to do this week
- Ability to edit the data table section after it has been completed
- Decide approach for 3 metadata fields currently not in design
- Dimension: Load reference data
- Stand up all bits of infrastructure from scratch in Terraform for Marvell
- Design proposal - data view
- Define SW3 OKRs and metrics
- Test the proposed taxonomy with consumers
- [BUG] Welsh Error code is in English
- [BUG] Site goes back to English when homepage is selected.
- [BUG] Website goes back to English when Welsh has been selected.
- Data architecture internals - online cube model
- Plan the next round of consumer research
- Stand up the service in WG Azure
Goals
These are the goals that we set for this sprint:
Language switching working as expected (development) In progress
Front end for time reference data (development) In progress
First design for consumer-side table view (design) In progress
Screen shot of risks and issues board
Sprint 16 - Quartile
Weekly report
Quartile
What we did last week
- TRAN0140 update data table prototype
- HLTH3020 update data table prototype
- Test the guidance for publishers
- Enable a working environment
- Run the consumer taxonomy study
- [SPIKE] OneLogin prototype
- Discussion - what needs to happen next to progress data migration?
- Stand up a blob storage and a redis instance
What we’re planning to do this week
- Design exploration - upload your own lookup table
- Extract tran0140 for testing updates
- Get reference data into postgres database
- Test the proposed taxonomy with consumers
- Test the update journey designs
- MVP consumer site prototype
- Plan the next round of consumer research
- Identify candidate area that could be prototyped and conducted in Welsh.
- Data table: Column labelling
- Stand up the service in WG Azure
- Implement Task List Create a dataset
These are the goals that we set for this sprint:
- Create journey up to upload data table - saying what the data table contains (SME to upload flying start csv data) In progress
Screen shot of risks and issues board
Sprint 15 p-value
Weekly report
p-value
What we did last week
- First data prototype for publisher update journey research
- Consumer experience workshops: challenges
- Provide sample data from SW2 data cube to help with SW3 format
- [SPIKE] Data access strategy
What we’re planning to do this week
- Get reference data into postgres database
- Test the proposed taxonomy with consumers
- Test the update journey designs
- Test the guidance for publishers
- Run the consumer taxonomy study
- Plan the next round of consumer research
- Discussion - what needs to happen next to progress data migration?
- Data table: Column labelling
- Stand up the service in WG Azure
These are the goals that we set for this sprint:
Complete “publish a dataset” up to metadata (development) In progress
Azure pipeline in WG estate (dev ops) In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
Access to the source data There is a danger that the data that we need to access may not be available all the time
Azure pipeline We do not yet have a full pipeline from Marvell Azure to Welsh Government Azure
Screen shot of risks and issues board
Sprint 15 p-value
Weekly report
p-value
What we did last week
- Start to think about publishing approval process - MVP
- Follow up on feedback so far from taxonomy study
What we’re planning to do this week
- Mocked data prototypes for publisher update journey research
- Consumer experience workshops: challenges, ideas and planning
- Run the consumer taxonomy study
- Plan the next round of consumer research
- [SPIKE] OneLogin prototype
- Provide sample data from SW2 data cube to help with SW3 format
- Move backend over to using Node Streaming API
- Discussion - what needs to happen next to progress data migration?
- Data table: Column labelling
- Stand up the service in WG Azure
- [SPIKE] Data access strategy
These are the goals that we set for this sprint:
Complete “publish a dataset” up to metadata (development) In progress
Azure pipeline in WG estate (dev ops) In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
Access to the source data We think we have the access we need
Azure pipeline We do not yet have a full pipeline from Marvell Azure to Welsh Government Azure
Screen shot of risks and issues board
Sprint 16-mid Quartile
Weekly report
Quartile
What we did last week
- Share guidance with SME in preparation for testing in working software
- Design exploration - upload your own lookup table
- Update data table prototype with real data
- Test data preparation guidance with publishers
- Run the consumer taxonomy study
- Follow up on feedback so far from taxonomy study
- Consume the domain model on the front end
- Data table: Column labelling
- Update content on ‘Create a new dataset’ to reflect current approach
- Replacing ‘Name the dataset’ with ‘What is the title of the dataset’
- Implement domain model on back-end
- Implement Task List Create a dataset
- Implement Successful upload confirmation (data table preview)
What we’re planning to do this week
- Dimension: Load reference data
- Prototype iterations to update data table journey
- Stand up all bits of infrastructure from scratch in Terraform for Marvell
- Define SW3 OKRs and metrics
- Remove lang from URL path in the backend and use header
- Extract tran0140 for testing updates
- Get reference data into postgres database
- Test the proposed taxonomy with consumers
- Test the update journey designs
- MVP consumer site prototype
- Plan the next round of consumer research
- Identify candidate area that could be prototyped and conducted in Welsh.
- Stand up the service in WG Azure
These are the goals that we set for this sprint:
- Create journey up to upload data table - saying what the data table contains (SME to upload flying start csv data) In progress
Screen shot of risks and issues board
Sprint 14 (mid sprint) Outlier
Weekly report
Outlier
What we did last week
- Report on OData
- Conduct and analyse tree test of topics taxonomy (with publishers)
What we’re planning to do this week
- OData loader
- Conduct and analyse tree test of topics taxonomy (with publishers)
- KAS Analysis of data migration exercise (questionnaire)
- Discussion - what needs to happen next to progress data migration?
- Planning next round of research with publishers.
- Stand up the service in WG Azure
- [SPIKE] Data access strategy
- Implement domain model on back-end
- Data domain model
- Consumer experience workshops: Challenges and idea generation
Sprint goals
These are the goals that we set for this sprint:
Complete “publish a dataset” up to metadata (development) In progress
Azure pipeline in WG estate (dev ops) In progress
Get full download of most recent OData (data) In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
- Access to the source data We are waiting for access to a server which we think might help with data access and migration
- Azure subscriptions We have requested further permissions which are required to build the pipeline from the Marvell estate to the WG estate
Screen shot of risks and issues board
Risk impact chart
Sprint 14 Outlier
Weekly report
Outlier
What we did last week
- OData loader
- KAS Analysis of data migration exercise (questionnaire)
- Prototype update journey
- Planning next round of research with publishers.
- Make publishers aware when a cube is first created - approach
- Implement hierarchies - alongside geography table
- Happy path user flow for updating a dataset
What we’re planning to do this week
- [SPIKE] OneLogin prototype
- Report on OData
- Move backend over to using Node Streaming API
- Conduct and analyse tree test of topics taxonomy (with publishers)
- Discussion - what needs to happen next to progress data migration?
- Stand up the service in WG Azure
- [SPIKE] Data access strategy
- Implement domain model on back-end
- Data domain model
These are the goals that we set for this sprint:
Complete “publish a dataset” up to metadata In progress
Understand which data will be migrated into the new service and how In progress
Azure pipeline in WG estate In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
Access to the source data We are seeking access to a data source which we think will help with progress
Azure pipeline We do not yet have a full pipeline from Marvell Azure to Welsh Government Azure
Screen shot of risks and issues board
Sprint 12 (mid sprint) mean
Weekly report
Mean
What we did last week
- Analyse findings from workshops with publishers & agree next steps
- Hold review workshops with publishers to test options for making updates to a dataset
- Try to construct a csv from changing OData
- Register Dynamics - Cyber Essentials Plus
- Implement auth into relevant services
What we’re planning to do this week
- Stand up the service in WG Azure
- Plan and conduct tree test of topics taxonomy (publishers and data consumers)
- [SPIKE] Data access strategy [On Hold 11/07/2024]
- Data domain model
- Data audit for data migration
- Implement Successful upload confirmation (data table preview)
Sprint goals
These are the goals that we set for this sprint:
Complete “publish a dataset” up to metadata (development) In progress
Converge on approach for the update journey (research and design) In progress
Azure pipeline in WG estate (development) In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
- Access to the source data A data archive file was provided, but it did not contain the data that was anticipated; we are continuing discussions to obtain the correct file
- Azure subscriptions We have the subscription. We expect to have the pipeline very soon
Screen shot of risks and issues board
Risk impact chart
Sprint 13 (mid sprint) The Letter N
Weekly report
The Letter n
What we did last week
- Update Solution design document
- Plan for project board
- [TASK] Investigate missing values in SW2
What we’re planning to do this week
- OData loader
- Conduct and analyse tree test of topics taxonomy (with publishers)
- KAS Analysis of data migration exercise (questionnaire)
- Prototype update journey
- Discussion - what needs to happen next to progress data migration?
- Planning next round of research with publishers.
- Stand up the service in WG Azure
- Happy path user flow for updating a dataset
- [SPIKE] Data access strategy
- Implement domain model on back-end
- Data domain model
Sprint goals
These are the goals that we set for this sprint:
Complete “publish a dataset” up to metadata (development) In progress
Azure pipeline in WG estate (dev ops) In progress
Get full download of most recent OData (data) In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
- Access to the source data We have received another data file today, which we are hoping will contain everything that we need
- Azure subscriptions We have requested further permissions which are required to build the pipeline from the Marvell estate to the WG estate
Screen shot of risks and issues board
Risk impact chart
Sprint 12 Mean
Weekly report
Mean Average
What we did last week
- Create stimulus for update journey research
- Review the spreadsheet coming out of Bilingual toolkit discussion
- Plan research to review options for the update user journey
- Proposed approach for footnotes including missing values
- Implement ‘Create a new dataset’ page
What we’re planning to do this week
- Hold review workshops with publishers to test options for making updates to a dataset
- Implement hierarchies - alongside geography table
- Stand up the service in WG Azure
- Happy path user flow for updating a dataset
- [SPIKE] Data access strategy [On Hold 11/07/2024]
- Data domain model
- Data audit for data migration
- Register Dynamics - Cyber Essentials Plus
- Implement Successful upload confirmation (data table preview)
These are the goals that we set for this sprint:
Prepare to test candidate options for update dataset user journey (research and design) In progress
Converge on approach for the update journey In progress
Azure pipeline in WG estate In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
Access to the source data The data sharing agreement has been signed and returned to the Welsh Government, who will then provide us with the data
Agreement on access for Marvell subcontractors (Cyber Essentials Plus) This is no longer a blocker; our subcontractors have the relevant certification
Azure subscriptions Marvell is working to establish pipelines in Azure in order to begin migration of the service into the WG estate.
Screen shot of risks and issues board
Sprint 13 The letter n
Weekly report
The letter n
What we did last week
- Plan and conduct tree test of topics taxonomy (publishers and data consumers)
- Data audit for data migration
- Held review workshops with publishers to test options for making updates to a dataset
- Proposed iterations to update journey approach informed by workshops with publishers
- Planning for taxonomy tree test
What we’re planning to do this week
- Prototype update journey
- Discussion - what needs to happen next to progress data migration?
- Planning next round of research with publishers.
- Stand up the service in WG Azure
- Happy path user flow for updating a dataset
- [SPIKE] Data access strategy
- Implement domain model on back-end
- Data domain model
These are the goals that we set for this sprint:
Complete “publish a dataset” up to metadata In progress
Get full download of most recent OData In progress
Azure pipeline in WG estate In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
Access to the source data Progress is being made on understanding which data needs to be migrated and which data can be recreated in the new system
Azure pipeline Work is continuing to build the development pipeline
Screen shot of risks and issues board
Sprint 11 (mid sprint) longitudinal study
Weekly report
Longitudinal study
What we did last week
- Create stimulus for update journey research
- Proposed approach for footnotes including missing values
What we’re planning to do this week
- Implement hierarchies - alongside geography table
- Stand up the service in WG Azure
- Review the spreadsheet coming out of Bilingual toolkit discussion
- Happy path user flow for updating a dataset
- [SPIKE] Data access strategy [On Hold 11/07/2024]
- Plan the next round of research
- Data domain model
- Data audit for data migration
- Register Dynamics - Cyber Essentials Plus
- Implement Successful upload confirmation (data table preview)
These are the goals that we set for this sprint:
Prepare to test candidate options for update dataset user journey (research and design) In progress
Complete publish a dataset up to task list with stories up to metadata (development) In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
Access to the source data We are waiting to take a cut of the data that we should be able to use to inform further work, especially migration
Agreement on access for Marvell subcontractors (Cyber Essentials Plus) CE+ audit booked by RD for Friday 26th July
Azure subscriptions We now have the access we need to the WG Azure estate
Screen shot of risks and issues board
Risk impact chart
Sprint 11 longitudinal study
Weekly report
Longitudinal study
What we did last week
- Formalise options for updating data table (append or replace etc)
- Identify and interrogate list of update scenarios
What we’re planning to do this week
- Create stimulus for update journey research
- Stand up the service in WG Azure
- [SPIKE] Explore react front-end for the app
- Happy path user flow for updating a dataset
- [SPIKE] Data access strategy [On Hold 11/07/2024]
- Plan the next round of research
- Data domain model
- Data audit for data migration
- Register Dynamics - Cyber Essentials Plus
- Proposed approach for footnotes including missing values
- Implement Successful upload confirmation (data table preview)
These are the goals that we set for this sprint:
Prepare to test candidate options for update dataset user journey (research and design) In progress
Complete publish a dataset up to task list with stories up to metadata (development) In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
Access to the source data We are waiting to take a cut of the data that we should be able to use to inform further work, especially migration
Agreement on access for Marvell subcontractors (Cyber Essentials Plus) This is currently being pursued by Marvell as a priority
Azure subscriptions Marvell is working to establish pipelines in Azure in order to begin migration of the service into the WG estate.
Screen shot of risks and issues board
Sprint 10 (Mid) Kruskal-Wallis test
Weekly report
Kruskal-Wallis test
What we did last week
- Hold workshop to review toolkit for bilingual software and develop action plan
- Review high-level plan for research in Beta
What we’re planning to do this week
- [SPIKE] Explore react front-end for the app
- Formalise options for updating data table (append or replace etc)
- Happy path user flow for updating a dataset
- Identify and interrogate list of update scenarios
- [SPIKE] Data access strategy
- Recommendations and next steps from round 3 testing
- Guidance to publishers
- Data domain model
- Data audit for data migration
- Register Dynamics - Cyber Essentials Plus
- Proposed approach for footnotes including missing values
- Implement auth into relevant services
Goals
These are the goals that we set for this sprint:
Authorisation for the service (development) In progress
Agree TO-BE update dataset user journey - minimum viable product (research and design) In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
Access to the source data We have access to the data on a virtual machine and are still working to get a full, editable cut of the data which we require for migration
Agreement on access for Marvell subcontractors Register Dynamics are currently in discussion with the auditors to gain certification
Architecture form - Azure subscriptions We do not yet have permission to access the Welsh Government Azure estate
Screen shot of risks and issues board
Sprint 10 Kruskal-Wallis test
Weekly report
Kruskal-Wallis test
What we’re planning to do this week
- Formalise options for updating data table (append or replace etc)
- Happy path user flow for updating a dataset
- Identify and interrogate list of update scenarios
- [SPIKE] Data access strategy
- Recommendations and next steps from round 3 testing
- Data domain model
- Data audit for data migration
- Register Dynamics - Cyber Essentials Plus
- Hold workshop to review toolkit for bilingual software and develop action plan
- Proposed approach for footnotes including missing values
- Implement auth into relevant services
Goals
These are the goals that we set for this sprint:
Authorisation for the service (development) In progress
Agree TO-BE update dataset user journey - minimum viable product (research and design) In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
Access to the source data We now have access to the data on a machine which is available to the team. Ultimately we will still need download access to the data
Agreement on access for Marvell subcontractors (Cyber Essentials Plus) certification is expected by the end of the week - the audit is booked
Azure subscriptions Azure is still TBC - our SDD was updated to reflect a few more questions
Screen shot of risks and issues board
Sprint 9 Mid - Joint Probability Distribution
Weekly report
What we did last week
- Conduct data consumer survey analysis
- Implement ‘Name the dataset’
- Implement ‘Choose a dataset to upload’
What we’re planning to do this week
- [SPIKE] Data access strategy
- Recommendations and next steps from round 3 testing
- Analyse findings from ‘Browse by topic’ exercise with data consumers
- Data domain model
- Data audit for data migration
- Register Dynamics - Cyber Essentials Plus
- Hold workshop to review toolkit for bilingual software and develop action plan
- [SPIKE] Gel React components
- High-level plan for user research in Beta
- Investigate missing values in SW2 - and map out TO-BE (Data & Design)
- Implement auth into relevant services
Goals
These are the goals that we set for this sprint:
Authorisation for the service (development) In progress
Settle on a design for footnotes, notes, missing values, provisional data (research and design) In progress
Start to understand the update journey (research and design) In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
- Access to the source data
We are exploring another approach to getting a copy of the data for which we hope to get approval
- Agreement on access for Marvell subcontractors
We have booked a Cyber Essentials Plus Audit
- Architecture form - Azure subscriptions
We are waiting for the go-ahead for this after discussions around the Service Description Document
Screen shot of risks and issues board
Sprint 9 - Joint probability distribution
Weekly report
Joint probability distribution
What we’re planning to do this week
- Analyse findings from ‘Browse by topic’ exercise with data consumers
- Data domain model
- Data audit for data migration
- Register Dynamics - Cyber Essentials Plus
- Analyse Stats Wales 2 data [On hold 06/06/2024]
- [SPIKE] Gel React components
- High-level plan for user research in Beta
- Investigate missing values in SW2 - and map out TO-BE (Data & Design)
- Implement auth into relevant services
Goals
These are the goals that we set for this sprint:
Authorisation for the service (development) In progress
Settle on a design for footnotes, notes, missing values, provisional data (research and design) In progress
Start to understand the update journey (research and design) In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
Access to the source data Access to source data - TBC
Agreement on access for Marvell subcontractors (Cyber Essentials Plus) certification is expected by the end of the week - the audit is booked
Azure subscriptions Azure is still TBC - our SDD was updated to reflect a few more questions
Screen shot of risks and issues board
Screen shot of prototype tested with users
Task list for data publishers
Sprint 8 Mid - Intersecting Iguanas
Weekly report
What we did last week
- Start to understand the needs being met by the current OData service
- Connect prototype designs with stories
- Use SW2 data to build prototype data for testing new system
- Respond to Jack (CDPS) regarding data standards
- Plan engagement for Welsh Statistical Liaison Committee event (5th June)
- Welsh statistical liaison committee meeting (WSLC)
- Plan for project board - Monday 10th
- Adjust the build pipeline
What we’re planning to do this week
- Implement ‘Choose a dataset to upload’
- Understand the variety of data sources
- Finalise Solution Design Document
- Implement ‘Create a new dataset’ page
- Align roadmap and release plan
- Understand and reconcile geographical coverage options
- Implement auth into relevant services
- Model a mock data cube - Architecture
- High-level plan for user research in Beta
- [SPIKE] Gel React components
- Implement ‘Name the dataset’
- Analyse Stats Wales 2 data
- Register Dynamics - Cyber Essentials Plus
- Testing prototype round 3 - with publishers
- Data audit for data migration
- Data domain model
- Analyse findings from ‘Browse by topic’ exercise with data consumers
Goals
These are the goals that we set for this sprint:
Complete testing of prototype round 3 with publishers In progress
Implement Auth on the beta application In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
- Access to the source data
We are exploring another approach to getting a copy of the data for which we hope to get approval
- Agreement on access for Marvell subcontractors
We are reviewing with our subcontractors the outstanding points that resulted from the Cyber Essentials Plus audit
- Architecture form - Azure subscriptions
We have a meeting with the architect’s office to review the current Service Description Document
Screen shot of risks and issues board
Sprint 8 - Intersecting Iguanas
Weekly report
What we did last week
- Get access to WG estate
- First iteration of TO-BE user flow for create dataset
- Prepare prototype V3 for testing with publishers
What we’re planning to do this week
- Implement ‘Choose a dataset to upload’
- Finalise Solution Design Document
- Start to understand the needs being met by the current OData service
- Implement ‘Create a new dataset’ page
- Align roadmap and release plan
- Model a mock data cube - Architecture Data & Design
- High-level plan for user research in Beta
- Implement ‘Name the dataset’
- Analyse Stats Wales 2 data
- Plan engagement for Welsh Statistical Liaison Committee event (5th June)
- Register Dynamics - Cyber Essentials Plus
- Testing prototype round 3 - with publishers
Goals
These are the goals that we set for this sprint:
- Complete testing of prototype round 3 with publishers In progress
- Implement Auth on the beta application In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
- Access to the source data
We are exploring another approach to getting a copy of the data for which we hope to get approval today
- Agreement on access for Marvell subcontractors
Our subcontractors are in the process of getting the required certification
- Architecture form - Azure subscriptions
We are still responding to comments on the service design documents; we do not currently have access
Screen shot of risks and issues board
Sprint 7 - Helpful Histogram (mid-sprint)
What we did last week
- Access full data cubes for StatsWales 2
- Create SOW and Milestones document for Private Beta
- Decide scope for the iteration 3 with publishers
- Conduct user research analysis of data processors / publishers - Round 2
- Prep for project board - highlight report
- Create first iteration of TO-BE journey map for publisher
- Iterate data publishing content journey - prototype version 3
- User research housekeeping of participant lists for Private Beta
- Transfer materials from Objective Connect to our shared drive
- Plan third round of testing with publishers
- Exploration session into notes
What we’re planning to do this week
- Implement ‘Choose a dataset to upload’
- Understand the variety of data sources
- Get access to WG estate
- Finalise Solution Design Document
- Understand the needs being met by the current OData service
- Implement ‘Create a new dataset’ page
- First iteration of TO-BE user flow for create dataset
- Align roadmap and release plan
- [SPIKE] Explore if there is more than one data field in a StatsWales 2 cube
- Understand and reconcile geographical coverage options
- Implement ‘Name the dataset’
- Analyse Stats Wales 2 data
- Prepare prototype V3 for testing with publishers
Goals
These are the goals that we agreed in planning:
- Refine the Beta backlog In progress
- Start migration to Welsh Gov Azure In progress
- Plan next round of research with data publishers In progress
Things to bear in mind / What’s blocking us
The following things are still blocking us, although progress is being made:
- Agreement on access for Marvell subcontractors - Register Dynamics are going through the process of being audited for Cyber Essentials Plus
- Architecture form - this has been discussed and the issues that were raised are being addressed; we do not yet have access to the subscriptions that we need
- We now have access to SW2 data cubes.
Screen shot of risks and issues board
Sprint 7 - Helpful Histogram
What we’re planning to do this week
- Access full data cubes for StatsWales 2 [ON HOLD 18/04/2024]
- Understand the variety of data sources
- Create SOW and Milestones document for Private Beta
- Finalise Solution Design Document
- Implement ‘Create a new dataset’ page
- Create first iteration of TO-BE journey map
- Iterate data publishing content journey - prototype version 3
- User research housekeeping of participant lists for Private Beta
- Align roadmap and release plan
- Explore if there is more than one data field in a StatsWales 2 cube
- Understand and reconcile geographical coverage options
Goals
These are the goals that we set for this sprint:
- Refine the Beta backlog In progress
- Start migration to Welsh Gov Azure In progress
- Plan next round of research with data publishers In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
- Access to the source data - We hope to get access to the data this afternoon (21/05/2024)
- Agreement on access for Marvell subcontractors - Our subcontractors are in the process of getting the required certification
- Architecture form - Azure subscriptions - We are still responding to comments on the service design documents; we do not currently have access, and we are unsure whether this is a dependency for access
Screen shot of risks and issues board
Sprint 6 - Giddy Gaussian (mid-sprint)
What we did last week
- Update user needs board based on research findings
- Break apart the monolith into smaller services
- Analyse findings from the current round of user research
- Decide what we’re going to test next - identify iteration 3
- Alpha assessment
- Data publisher round 2 testing - research analysis prep & workshop
- Conduct user research analysis of data processors - Round 2
- Onboarding new designer
What we’re planning to do this week
- Access full data cubes for StatsWales 2 [ON HOLD 18/04/2024]
- Understand the variety of data sources
- SOW and Milestones - documentation required in support of Beta
- Solution Design Document
- Attempt to implement OData api on the backend app
- Understanding the domain process Simon (GEL)
- Iterate data publishing content journey (prototype) - Version 3
- Aligning roadmap and release plan
Goals
These are the goals that we agreed in planning:
- Refine the Beta backlog In progress
- Start migration to Welsh Gov Azure In progress
- Plan next round of research with data publishers In progress
Things to bear in mind / What’s blocking us
- Today it was agreed in project board that the project had passed its Alpha assessment
The following things are still blocking us, although progress is being made:
- Access to the data - we have now received invitations for accounts to access the data
- Agreement on access for Marvell subcontractors - Register Dynamics are going through the process of being audited for Cyber Essentials Plus
- Architecture form - this has been discussed and the issues that were raised are being addressed; we do not yet have access to the subscriptions that we need
Screen shot of risks and issues board
Sprint 6 - Giddy Gaussian
What we did last week
- Investigating methods to help data publishers clean and validate their datasets at upload time to increase the value of the platform as a whole - dependency on data cubes
- Data publisher round 2 testing - research analysis prep & workshop
- In the run-up to the beta phase, we will begin horizon scanning to identify constraints to release for the beta phase such as security checkpoints, approval processes, interfacing with other services/teams, etc.
- Content review of current prototypes
- Access full data cubes for StatsWales 2
What we’re planning to do this week
- SOW and Milestones - documentation required in support of Beta
- Update user needs board based on research findings
- Create to-be service map first iteration
- Iterate data publishing user journey (design)
- Complete the Solution Design Document Form
Goals
These are the goals that we set for this sprint:
- Refine the Beta backlog In progress
- Start migration to Welsh Gov Azure In progress
- Plan next round of research with data publishers In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
- Access to the data - We downloaded all the StatsWales 2 OData (1,377 data cubes), which we are currently analysing while we await access to the source data.
- Access to the source data - we are currently waiting for security re-clearance in order to get access to the data
- Agreement on access for Marvell subcontractors - Our subcontractors are in the process of getting the required certification
- Architecture form - Azure subscriptions - we are responding to comments on our service design document, but we don’t anticipate that this will now stop us getting access
Screen shot of risks and issues board
Sprint 5 - Fabulous F-Value (mid-sprint)
What we did last week
- Create prototype (iteration 2) for data processor/ publisher
- Hold second round of prototype testing sessions
- Reference data - controlled vocabularies such as local authority codes
- In the run-up to the beta phase, we will begin horizon scanning
What we’re planning to do this week
- Access full data cubes for StatsWales 2 [ON HOLD 18/04/2024]
- Recommending our approach for Beta / Alpha report
- SOW and Milestones - documentation required in support of Beta
- Data consumer - Prototype (iteration 2)
- Alpha assessment
- Data publisher round 2 testing - research analysis prep & workshop
Goals
These are the goals that we agreed in planning:
- Complete the data processor user testing round 2 Done
- Complete Alpha / Beta report (and SOW Beta) In progress
- Break the alpha service into components Done
Things to bear in mind / What’s blocking us
- We conducted the first half of the Alpha assessment; another session is scheduled for Friday
The following things are still blocking us, although progress is being made:
- Access to the data - discussions are happening today to get access to the data
- Agreement on access for Marvell subcontractors - Register Dynamics are going through the process of being audited for Cyber Essentials Plus
- Architecture form - Azure subscriptions - a draft of this form will be shared with WG technical staff today
Screen shot of risks and issues board
Sprint 5 - Fabulous F-Value
What we did last week
- Create metadata database and link to assets in datalake (Goal)
- Implementation of Internationalisation
- Create summary document of 1st round testing
- Advertise user consumer survey more widely
What we’re planning to do this week
- Start to implement GEL into the service
- Hold second round of prototype testing sessions - data processor / publisher
- Common data types and their encodings, for example geo-spatial data, cost/price data or population measures - beta?
- Hold second round of prototype testing with data consumers
- Break apart the monolith into smaller services
- Authentication around the service
Goals
These are the goals that we set for this sprint:
- Complete the data processor user testing round 2 In progress
- Complete Alpha / Beta report (and SOW Beta) In progress
- Break the alpha service into components In progress
Things to bear in mind / What’s blocking us
The following things are still blocking the progress of the project
- Access to the data
- Agreement on access for Marvell subcontractors
- Architecture form - Azure subscriptions
Screen shot of risks and issues board
Sprint 4 - Educated Estimation (mid-sprint)
What we did last week
- Prepare new data consumer participant list
- Organise 5 SW participants for next round of testing
- SPIKE: Presenting data ingestion, data format options for next round of testing
What we’re planning to do this week
- Access full data cubes for StatsWales 2
- Create metadata database and link to assets in datalake (Goal)
- Recommending our approach for Beta / Alpha report
- Reference data - controlled vocabularies such as local authority codes
- SOW and Milestones - documentation required in support of Beta
- Data processor/publisher - Prototype (iteration 2)
- Data consumer - Prototype (iteration 2)
- Implementation of Internationalisation
- Create summary document of 1st round testing
- Advertise user consumer survey more widely
Goals
These are the goals that we agreed in planning:
- Analyse the findings from data processors and data consumers (research and design) In progress
- Create metadata database and link to assets in data lake (data and development) In progress
Things to bear in mind / What’s blocking us
These are the things that are limiting our progress and are identified as large risks
- Access to Stats Wales Data cubes
Screen shot of risks and issues board
Sprint 4 - Educated Estimate
What we did last week
- Spike - explore Reference data and linking - data modelling and standardisation:
What we’re planning to do this week
- Create metadata database and link to assets in datalake
- Reference data - controlled vocabularies such as local authority codes
- Prepare new data consumer participant list
- Implementation of Internationalisation
- Create summary document of 1st round testing
Goals
- Iterate data processor’s user journeys (User Research and Design) In progress
- Create metadata database and link to assets in data lake (data and development) In progress
- Internationalisation (data and development) In progress
Things to bear in mind / What’s blocking us
We are still being blocked by these two issues - although we hope to make some progress today
- Cyber essentials plus certification
- Access to Stats Wales Data cubes
Screen shot of risks and issues board
Sprint 3 - Descriptive Statistics (mid-sprint)
What we did last week
- Recruit participants for testing the alpha prototype
- Data consumer - research analysis
- Conduct remote usability testing of our prototypes - data consumers
- Data processor/publisher - Prototype (iteration 1)
- Data consumer - Prototype (iteration 1)
What we’re planning to do this week
- Automate infrastructure as code for publishing app
- Analyse survey data from data processors
- Access full data cubes for StatsWales 2 [ON HOLD 28/03/2024]
- Create metadata database and link to assets in datalake
- Identify what changes and improvements we might want to make in subsequent iterations
- Understand the variety of data sources [DEPENDENCY ON CYBER ESSENTIALS]
- Data processor / publisher - research analysis
Goals
These are the goals that we agreed in planning:
- Analyse the findings from data processors and data consumers (research and design) In progress
- Create metadata database and link to assets in data lake (data and development) In progress
Things to bear in mind / What’s blocking us
Currently three things are limiting our progress and are identified as large risks
- Cyber essentials plus certification
- Access to Stats Wales Data cubes
- SC Clearance
Screen shot of risks and issues board
Sprint 3 - Descriptive Statistics
What we did last week
- Disseminate the survey to the data publishers
- Define concepts for prototype testing
- Recruit participants for testing the alpha prototype
- Session to look at current product analytics
- Upload data to a data lake (includes Demo)
- Start conducting remote usability testing of our prototypes - data processors
What we’re planning to do this week
- Automate infrastructure as code for publishing app
- Analyse survey data from data processors
- Develop prototypes
- Access full data cubes for StatsWales 2
- Start conducting remote usability testing of our prototypes - data consumers
Goals
These are the goals that we agreed in planning:
- Analyse the findings from data processors and data consumers (research and design) In progress
- Create metadata database and link to assets in data lake (data and development) In progress
Things to bear in mind / What’s blocking us
Currently two things are limiting our progress and are identified as large risks
- Cyber essentials plus certification
- Access to Stats Wales Data cubes
Screen shot of risks and issues board
Sprint 2 - Curious Correlation (mid Sprint)
What we did last week
- Disseminate the survey to the data publishers.
- Implement the data consumer bilingual survey using Smart Survey
- Plan session to look at current product analytics
- Terms of Reference
- Define concepts for prototype testing
What we’re planning to do this week
- Agree approach for displaying data for Alpha
- Conduct remote usability testing data processors / publishers
- Implement privacy notice statement in HTML on the SW site
- Automate infrastructure as code for publishing app
- Look at the data quality ticket in the delivery board - split into tickets and identify first step
- Plan testing of the alpha prototype
- Develop prototypes
- Recruit participants for testing the alpha prototype
- Access full data cubes for StatsWales 2
- Publish data consumer survey to the SW site
- Analyse survey data from data processors
Goals
These are our goals for this sprint
- Create metadata database and link to assets in datalake In progress
- Design and test prototypes with data processors/publishers In progress
- Data: start to structure/analyse data in Azure once there is some in there In progress
- Upload data to a data lake (includes Demo) Done
Things to bear in mind
- Two members of our team have now managed to transfer their SC clearance
Screen shot of risks and issues board
Chart showing risk impact
Screenshot of delivery plan
Sprint 2 - Curious Correlation
What we did last week
- Create a plan for prototype testing
- Pilot data processor feedback survey with two participants
- IaC a data lake
- Points / Milestones against which the project will be tracked / paid
- Mapping data on the current service
- Examination of lookup tables and identification of those that overlap
- Design exploration - what first after user story review?
- Get data consumer survey translated into Welsh
- Build backlog of user stories in Azure
- Walk through of the Stats Wales Back office
What we’re planning to do this week
- Plan testing of our prototype with users
- Automate infrastructure as code for publishing app
- Terms of Reference
- Disseminate the survey to the data publishers
- Agree approach for displaying data for Alpha
- Analyse survey data from data processors
- Develop prototypes
- Access full data cubes for StatsWales 2
- Define concepts for prototype testing
- Recruit participants for testing the alpha prototype
- Session to look at current product analytics
- Look at the data quality ticket in the delivery board - split into tickets and identify first step
Goals
These are our goals for this sprint
- Create metadata database and link to assets in datalake In progress
- Data: start to structure/analyse data in Azure once there is some in there In progress
- Design and test prototypes with data consumers In progress
- Upload data to a data lake (includes Demo) In progress
- Agree the scope for Alpha Done
- Document high-level architecture and Azure products Done
Things to bear in mind
- Transfer of SC clearance is still currently a limiting factor on progress
Screen shot of risks and issues board
Screenshot of delivery plan
Sprint 1 - Buoyant Boolean (mid Sprint)
What we did last week
- Conduct an analysis of potential platforms (e.g., within MS Azure)
- Is this service applicable for a service assessment?
- Initial / High level stakeholder map
- Set up a separate Azure environment for Marvell.
- Conduct a data architecture review
- Explore data storage and analysis options within Azure
What we’re planning to do this week
- Build backlog of user stories in Azure
- Create a plan for prototype testing
- Design exploration - what first after user story review?
- Disseminate the survey to the data publishers.
- Examination of lookup tables and identification of those that overlap
- Get data consumer survey translated into Welsh
- IaC a data lake
- Mapping data on the current service
- Pilot data processor feedback survey with two participants
- Plan testing of the alpha prototype
- Recruit participants for testing the alpha prototype
- Request access to disaster recovery for Statswales2
- Automate infrastructure as code for publishing app
Goals
These are our goals for this sprint
- Agree the scope for Alpha In progress
- Document high level architecture and Azure products In progress
- Data: start to structure/analyse data in Azure once there is some in there In progress
- Have a service in Azure that we can demo Done
- Understand the existing data process better Done
Things to bear in mind
- Transfer of SC clearance is still currently a limiting factor on progress
Screen shot of risks and issues board
Chart showing risk impact
Screenshot of delivery plan
Sprint 1 - Buoyant Boolean
What we did last week
- Conduct an analysis of potential platforms (e.g., within MS Azure)
- Is this service applicable for a service assessment?
- Initial / High level stakeholder map
- Set up a separate Azure environment for Marvell.
- Conduct a data architecture review
- Explore data storage and analysis options within Azure
What we’re planning to do this week
- 3 surveys - data processor, data publisher, data consumer
- Create a plan for prototype testing
- User story map
- Build backlog of user stories in Azure
- Design exploration - what first after user story review?
- Have a walkthrough of the existing data process
- Disseminate the survey to the data publishers and data processors.
- Document and share initial architecture
- Request access to disaster recovery for Statswales2
Goals
These are our goals for this sprint
- Understand the existing data process better In progress
- Agree the scope for Alpha In progress
- Document high level architecture and Azure products In progress
- Data: start to structure/analyse data in Azure once there is some in there In progress
- Have a service in Azure that we can demo Done
Things to bear in mind
- Transfer of SC clearance is currently a limiting factor on progress
Screen shot of risks and issues board
Screenshot of delivery plan
Sprint 0 - Arbitrary Asymptote
What we did last week
- Initial architecture discussion
- Ways of working meeting
- Project board set up
What we’re planning to do this week
- Conduct a data architecture review
- Set up a separate Azure environment for Marvell
- Initial / High level stakeholder map
Goals
These are our goals for this sprint
- Have a service in Azure that we can demo In progress
- Understand the existing data process better In progress
- Agree the scope for Alpha In progress
- Document high level architecture and Azure products In progress
- Create an initial research plan Done
- Review discovery outputs Done
Things to bear in mind
- The project started Tuesday 6th February
- This is our first weekly report