What are some Azure Data Lake use cases?
One of the popular use cases is combining the simplicity of Microsoft Power BI
with the power of Azure Data Lake. This use case demonstrates how to deliver a cost-efficient, high-performance, analytics-driven data lake architecture.
WHY DO I NEED A DATA LAKE OR WAREHOUSE?
To maximize productivity, your data needs to be organized, processed, and loaded into a data lake or warehouse. That destination acts as a central repository for data aggregated from disparate sources. It becomes the hub that lets you plug in your favorite tools so you can query, discover, visualize, or model that data.
WHAT DATA VISUALIZATION, BUSINESS INTELLIGENCE, REPORTING OR DASHBOARDING TOOLS CAN I USE?
DO YOU CHARGE FOR A DATA LAKE OR WAREHOUSE?
No, there will not be any charges from Openbridge for your data lake or cloud warehouse. Any charges are billed directly to you by the data destination provider (e.g., Azure, Google, or Amazon).
Does Azure Data Lake pricing use on-demand costing?
Every data lake or cloud warehouse has its own pricing model. Pricing often varies by usage, which is defined by the compute and storage consumed or provisioned. Depending on your situation and requirements, different price-performance considerations may come into play. For example, if you need to start with a no- or low-cost solution, Azure Data Lake charges only according to usage. This may provide you with the essentials to kickstart your efforts. If you have questions, feel free to reach out to us. We can offer tips and best practices on how best to set up a warehouse based on your needs.
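To make the usage-based model concrete, here is a back-of-the-envelope sketch of a pay-per-data-scanned pricing calculation. The rate used below is a hypothetical placeholder, not a real Azure price; always check the provider's pricing page for current rates.

```python
# Back-of-the-envelope estimate for a scanned-data pricing model.
# HYPOTHETICAL_RATE_PER_TB is a placeholder, NOT a real Azure price.

HYPOTHETICAL_RATE_PER_TB = 5.00  # placeholder $/TB scanned

def estimated_query_cost(bytes_scanned: int,
                         rate_per_tb: float = HYPOTHETICAL_RATE_PER_TB) -> float:
    """Cost of a single query under a pay-per-byte-scanned model."""
    tb_scanned = bytes_scanned / 10**12
    return tb_scanned * rate_per_tb

# Scanning 200 GB at the placeholder rate:
cost = estimated_query_cost(200 * 10**9)
```

The takeaway is that cost scales directly with data scanned, which is why the partitioning and columnar-format practices described elsewhere in this FAQ also reduce your bill.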
DO YOU FOLLOW DATA LAKE BEST PRACTICES FOR DATA PARTITIONING WITHIN Azure Data Lake?
Yes! Our approach to partitioning can help reduce the volume of data scanned per query, thereby improving performance and reducing cost for your data lake queries. You can restrict the volume of data scanned because partitions act as virtual columns. When you combine partitions with the use of columnar data formats like Apache Parquet, you are optimizing for best practices.
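The idea that partitions act as virtual columns can be sketched with a Hive-style directory layout, where the partition value lives in the path rather than inside the files. This is a standard-library illustration with hypothetical file names; a real lake would hold Parquet files, but the pruning logic is the same.

```python
# Conceptual sketch of partition pruning with a Hive-style layout (dt=YYYY-MM-DD).
# A query that filters on dt never opens files in other partitions.
import os
import tempfile

root = tempfile.mkdtemp()
for dt in ["2024-01-01", "2024-01-02", "2024-01-03"]:
    os.makedirs(os.path.join(root, f"dt={dt}"))
    with open(os.path.join(root, f"dt={dt}", "part-0000.csv"), "w") as f:
        f.write("order_id,amount\n1,9.99\n")

def files_to_scan(root: str, dt_filter: str) -> list:
    """The partition value comes from the path (a "virtual column"),
    so only matching partitions contribute files to the scan."""
    selected = []
    for part in os.listdir(root):
        key, _, value = part.partition("=")
        if key == "dt" and value == dt_filter:
            part_dir = os.path.join(root, part)
            selected.extend(os.path.join(part_dir, f) for f in os.listdir(part_dir))
    return selected

scanned = files_to_scan(root, "2024-01-02")  # only 1 of the 3 files is read
```

Filtering on the partition key cuts the scan from three files to one; at lake scale, the same mechanism skips entire directories of data.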
CAN I USE STANDARD SQL?
Yes, using standard SQL is supported for Azure Data Lake. Most destinations, like Google BigQuery, Amazon Redshift, Azure Data Lake and others support familiar SQL constructs, especially given we use Apache Parquet as the base format. There may be some limitations or best practices for the specific use case, but the rule of thumb is that SQL is available.
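To illustrate the kind of familiar SQL constructs that carry over across destinations (SELECT, WHERE, GROUP BY, aggregates), here is a small example using Python's built-in sqlite3 module as a stand-in for the warehouse engine; the orders table and its values are hypothetical.

```python
# Standard SQL constructs that read the same across most destinations.
# sqlite3 is only a stand-in engine here; the schema is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("east", 10.0), ("east", 5.0), ("west", 7.5)],
)

rows = conn.execute(
    """
    SELECT region, COUNT(*) AS n, SUM(amount) AS total
    FROM orders
    WHERE amount > 1.0
    GROUP BY region
    ORDER BY region
    """
).fetchall()
# rows -> [("east", 2, 15.0), ("west", 1, 7.5)]
```

Engines differ in extensions and edge cases, but a query written in this plain style generally ports across BigQuery, Redshift, and Azure Data Lake with little or no change.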
DO YOU OPTIMIZE FILE SIZES FOR Azure Data Lake?
Yes! We follow best practices for the file sizes of the objects we partition, split, and compress. Doing so ensures queries run more efficiently, and reading data can be parallelized because blocks of data are read sequentially. These benefits apply mostly to larger files; smaller files (generally less than 128 MB) do not always realize the same performance gains.
Does Azure Data Lake CSV querying work?
Yes, you can query CSV files. We have a service that automates loading CSVs for use in Azure Data Lake. However, CSV is not the most efficient format. DZone has published an article we wrote on the subject: Apache Parquet vs. CSV File
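As a minimal illustration of querying CSV data, the standard-library sketch below filters and aggregates a small inline CSV (the column names and values are hypothetical). Note that a row-oriented format forces the reader through every column of every row, which is one reason columnar Parquet is usually more efficient for analytics.

```python
# Minimal CSV "query" with the standard library: filter rows, sum a column.
# Every byte of every row must be parsed, even columns the query ignores.
import csv
import io

raw = "order_id,region,amount\n1,east,10.0\n2,west,7.5\n3,east,5.0\n"

reader = csv.DictReader(io.StringIO(raw))
east_total = sum(float(row["amount"]) for row in reader if row["region"] == "east")
# east_total == 15.0
```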
Which data lake or cloud warehouse should I be using?
When it comes to building your data strategy and architecture, it's essential to understand which data warehouses should be candidates for consideration. Typically, teams will be asking themselves questions like "How do I install and configure a data warehouse?" or "Which data warehouse solution will give me the fastest query times?" or "Which of my analytics tools are supported?" This article covers key features and benefits of five widely used data lake and warehouse solutions supported by Openbridge to help you choose the right one: How to Choose a Data Warehouse Solution that Fits Your Needs. If you have questions, feel free to reach out to us.
Do I need an expert services engagement for Azure Data Lake?
Typically, you will not need services for Azure Data Lake. Most customers are up and running with their Azure Data Lake data quickly. However, if you need support, we do offer expert services. There may be situations where you have specific needs relating to your Azure Data Lake data; these can require expert services to tailor Azure Data Lake to fit your requirements. Ultimately, our mission is to help you get value from your data, and this often happens more quickly with the assistance of our passionate expert services team.
Do I need to authorize Openbridge or its partners to access my Azure Data Lake system?
Yes, typically, Azure Data Lake requires authorization before we can access your data. You provide us with the Azure Data Lake authorizations so we can properly connect to the system. However, there are some situations where a provider can "push" data to us. In those cases, we provide them with connection details for our API or our Data Transfer Service. Once they have those details, they use that information to connect, authenticate, and deliver data to us.
DO YOU SUPPORT COMPRESSION AND FILE SPLITTING?
Yes! Azure suggests that compression and file splitting can significantly speed up Azure Data Lake queries. Smaller data sizes mean optimized queries, and they also reduce network traffic between storage and Azure Data Lake. When your data is splittable, Openbridge performs this Azure Data Lake optimization for you. This allows the execution engine in Azure Data Lake to optimize the reading of a file, increasing parallelism and reducing the amount of data scanned. In the case of an unsplittable file, only a single reader can read the file; in practice this mainly concerns smaller files (generally less than 128 MB), which are not split.
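The split-then-compress idea above can be sketched with the standard library. The 128 MB threshold mirrors the guidance in this FAQ; the payload here is tiny synthetic data purely for illustration (a real pipeline would use formats and codecs suited to the engine, such as Parquet with Snappy).

```python
# Sketch of file splitting plus compression with the standard library.
# Splitting lets multiple readers work in parallel; compression cuts
# storage and network transfer.
import gzip

TARGET_SPLIT_BYTES = 128 * 1024 * 1024  # ~128 MB per split in practice

def split_and_compress(payload: bytes, split_size: int) -> list:
    """Split data into chunks, then compress each chunk independently
    so every chunk remains independently readable."""
    chunks = [payload[i:i + split_size] for i in range(0, len(payload), split_size)]
    return [gzip.compress(chunk) for chunk in chunks]

payload = b"order_id,amount\n" * 10_000        # 160,000 bytes of repetitive data
parts = split_and_compress(payload, split_size=64 * 1024)  # small size for demo
compressed_total = sum(len(p) for p in parts)
# Repetitive data compresses well: compressed_total is far below len(payload)
```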
DO YOU SUPPORT COLUMNAR DATA FORMATS LIKE APACHE PARQUET?
Yes! Azure suggests the use of columnar data formats. We have chosen to use Apache Parquet over other columnar formats. Parquet stores data efficiently with column-wise compression, applying different encodings and compression schemes based on data type. Openbridge automatically handles the conversion of data to Parquet format, saving you time and money, particularly when Azure Data Lake executes ad hoc queries. Also, using Parquet-formatted files means reading fewer bytes from Azure Data Lake Storage, leading to better Azure Data Lake query performance.
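The core reason columnar layouts read fewer bytes can be shown with a toy comparison: to aggregate one column, a columnar store touches only that column's bytes, while a row store must walk every field of every row. The data below is hypothetical, and real Parquet adds per-column encoding and compression on top of this layout advantage.

```python
# Toy comparison of row-oriented vs. column-oriented storage footprints
# for a query that only needs the "amount" column.

rows = [{"order_id": i, "region": "east", "amount": 1.5} for i in range(1000)]

# Row-oriented: every field of every row is serialized together.
row_store = "\n".join(f"{r['order_id']},{r['region']},{r['amount']}" for r in rows)

# Column-oriented: each column is stored contiguously and independently.
column_store = {
    "order_id": ",".join(str(r["order_id"]) for r in rows),
    "region": ",".join(r["region"] for r in rows),
    "amount": ",".join(str(r["amount"]) for r in rows),
}

bytes_for_sum_row = len(row_store)               # must scan everything
bytes_for_sum_col = len(column_store["amount"])  # just one column's bytes
# bytes_for_sum_col is a small fraction of bytes_for_sum_row
```

In a pay-per-byte-scanned engine, reading only the needed columns translates directly into faster queries and lower cost.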
Does Azure Data Lake cost extra?
Azure does charge for the service. For the current costs, check out the Azure pricing page.
Where can I find Azure Data Lake documentation?