Data Integration and Architecture

FELD M supports companies not only in breaking down data silos (online and offline) at the strategic and conceptual level, but also in implementing solutions that merge these silos permanently.

We know exactly which challenges must be overcome when integrating and storing large, heterogeneous volumes of data so that the results remain consistent and performant. Our experienced experts develop robust solutions using modern technologies.

Our internal Development Team sees itself as the interface between data science, visualisation and analytics. Its tasks range from connecting to different interfaces (e.g. APIs from Adobe or Google Analytics, Facebook or various CRM systems) to linking and integrating data from different sources and making it available for analyses and interactive visualisations. The goal is to develop sustainable, reusable software and to guarantee high data quality.
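To illustrate what integrating heterogeneous sources involves, here is a minimal sketch of normalising records from two different systems (an analytics API and a CRM export) into one common schema. All field and function names are hypothetical and stand in for whatever the real interfaces deliver:

```python
# Illustrative sketch: unify records from heterogeneous sources into one
# common schema before analysis. Field names ('clientId', 'customer_number')
# are hypothetical examples of source-specific identifiers.
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Event:
    source: str
    user_id: str
    timestamp: str


def from_analytics(rows: Iterable[dict]) -> List[Event]:
    # Analytics APIs often identify users via a client ID
    return [Event("analytics", r["clientId"], r["ts"]) for r in rows]


def from_crm(rows: Iterable[dict]) -> List[Event]:
    # CRM exports may use a customer number and a different date field
    return [Event("crm", r["customer_number"], r["created_at"]) for r in rows]


events = from_analytics([{"clientId": "42", "ts": "2024-01-01"}]) + \
         from_crm([{"customer_number": "42", "created_at": "2024-01-02"}])
```

Once both sources speak the same schema, linking them (here via `user_id`) and feeding them into analyses or visualisations becomes a routine step rather than a per-source special case.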

For implementing complex ETL pipelines (Extract, Transform, Load) we rely mainly on Python, but also on Java, shell scripts, R or Perl, depending on requirements. In addition, we use database technologies tailored to the customer's needs. These range from classic relational databases (SQL Server, Oracle or PostgreSQL), document stores (MongoDB) and graph databases (Neo4j) to highly scalable systems such as Hadoop or Spark.
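The three ETL stages can be sketched in a few lines of Python. This is a deliberately minimal example, not a production pipeline: the CSV source, table name and columns are hypothetical, and an in-memory SQLite database stands in for whichever target store a project actually uses:

```python
# Minimal ETL sketch. Assumptions: a CSV-like source, hypothetical
# 'user_id'/'revenue' columns, and SQLite as a stand-in target database.
import csv
import io
import sqlite3

raw = "user_id,revenue\n1,10.5\n2,not_a_number\n3,7.0\n"

# Extract: parse the raw source into dictionaries
rows = list(csv.DictReader(io.StringIO(raw)))


# Transform: coerce types and drop records that fail validation,
# one simple way of guarding data quality inside the pipeline
def clean(row):
    try:
        return (int(row["user_id"]), float(row["revenue"]))
    except ValueError:
        return None


records = [c for r in rows if (c := clean(r)) is not None]

# Load: write the validated records into the target store
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE revenue (user_id INTEGER, revenue REAL)")
con.executemany("INSERT INTO revenue VALUES (?, ?)", records)
total = con.execute("SELECT SUM(revenue) FROM revenue").fetchone()[0]
```

In a real pipeline the extract step would call a source API or read from a data lake, and the load step would target the customer's database; the extract-validate-load structure stays the same.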

To host your data, we support you in selecting a suitable provider (based on a catalogue of criteria) or in designing your own hosting. Alternatively, we work together with external service providers who operate ISO-certified data centres in Germany.

Additionally, we help you build up your own in-house competencies. This can take the form of selecting and introducing suitable tools, or of best-practice workshops (e.g. on Spark and Python/R, or on working in the Azure Cloud).

Check out the other topics we specialise in!