Dbm (talk | contributions) (→SQLfast)
::''<b>Keywords</b>'': non-relational data model, key-value model, object model, NoSQL, schema-less database, multivalued property, composite property, document-oriented DBMS, MongoDB, CouchDB, Azure, Datastore Oracle, metadata, index, data migration, schema conversion
:*<b>Case 9. Temporal databases - Part 1</b>, draft version, <i>October 7, 2018.</i> [https://staff.info.unamur.be/dbm/Documents/Tutorials/SQLfast/SQLfast-Case09-Temporal-DB(1).pdf [full text]]
::''<b>Objective</b>'': In this study we examine various ways to organize the data describing the evolution of a population of entities. The basic model consists of storing the successive states of each entity, together with the time period during which each state was observable. We distinguish between transaction time, which refers to the time at which data are modified in the database, and valid time, which refers to modification events of entities in the real world. This study develops entity-based, attribute-based and event-based temporal database models. In these models, data management is ensured by triggers that automate, as far as possible, entity creation, modification and deletion operations.
::The next study will be devoted to temporal database querying.
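::The entity-based model sketched above can be illustrated in a few lines of SQL, here driven through Python's sqlite3 module. The table and column names are invented for the illustration, not those of the case study: each row of <code>CUSTOMER_H</code> stores one state of a customer with its validity period, and a trigger automates the closing of the current state when a new one is recorded.

```python
# Sketch of an entity-based temporal table (illustrative names): each row
# stores one state of a customer with the period [ValidFrom, ValidTo) during
# which that state held; '9999-12-31' marks the currently open state.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE CUSTOMER_H (
    CustID    INTEGER NOT NULL,
    Name      TEXT    NOT NULL,
    City      TEXT    NOT NULL,
    ValidFrom TEXT    NOT NULL,
    ValidTo   TEXT    NOT NULL DEFAULT '9999-12-31',
    PRIMARY KEY (CustID, ValidFrom)
);
-- Trigger automating state transitions: inserting a new state closes the
-- entity's current state by setting its ValidTo to the new ValidFrom.
CREATE TRIGGER close_previous_state
BEFORE INSERT ON CUSTOMER_H
BEGIN
    UPDATE CUSTOMER_H
    SET ValidTo = NEW.ValidFrom
    WHERE CustID = NEW.CustID AND ValidTo = '9999-12-31';
END;
""")
db.execute("INSERT INTO CUSTOMER_H(CustID, Name, City, ValidFrom) "
           "VALUES (1, 'Dupont', 'Namur', '2017-03-01')")
db.execute("INSERT INTO CUSTOMER_H(CustID, Name, City, ValidFrom) "
           "VALUES (1, 'Dupont', 'Liege', '2018-06-15')")

# Full history of entity 1: the first state was closed by the trigger.
for row in db.execute("SELECT City, ValidFrom, ValidTo FROM CUSTOMER_H "
                      "ORDER BY ValidFrom"):
    print(row)
```

::The same trigger-based approach extends to modification and deletion, which the case study covers in detail.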
::''Chapter contents'': Database performance evaluation. Generating high volumes of synthetic data. Integrating heterogeneous data sources. Data cleaning. Data anonymization. Random data extraction. Executing very large scripts. Query performance.
:*<b>Case 27. Conway's Game of Life</b>, draft version, <i>January 18, 2018.</i> [https://staff.info.unamur.be/dbm/Documents/Tutorials/SQLfast/SQLfast-Case27-Life-Game.pdf [full text]]
::''<b>Objective</b>'': This study is about games, worlds, life and death, borderline SQL applications and dramatic database optimization. The goal of the project is to implement the graphical animation of Conway's cellular automaton, aka the Game of Life. A game of life is made up of an infinite array of cells inhabited by a population of small animals, each of them occupying one cell. The transition from one state of the population to the next is specified by a set of simple computing rules. The goal of the game is to observe and study the evolution of the population. A game of life is implemented as a table in a database in which each row contains the coordinates and the content of a cell. The algorithms developed in this study load the initial state of a population, then compute the next states by applying the evolution rules. Finally, they visualize this evolution as an animated cartoon. The contribution of this study is twofold: it stresses the importance of database and algorithm optimization (the last version is 1,400 times faster than the first one) and it shows that relational databases and SQL may be quite efficient for developing matrix manipulation procedures (the SQL version is nearly 7 times faster than the equivalent Python program).
::This study is also a tribute to E. F. Codd, the inventor of the relational model of databases, who studied self-replicating cellular automata early in his career.
::''<b>Keywords</b>'': cellular automata, replicating system, Conway, glider, Codd, matrix manipulation, algorithm optimization, database optimization, declarative algorithm, table indexing, in-memory database, CTE, recursive query, vector graphics, SQLdraw, animated simulation, Python.
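::A minimal, unoptimized sketch of the core idea (not the case study's own implementation): one generation can be computed with a single SQL query that counts each candidate cell's live neighbours via coordinate offsets, then applies Conway's rules. The example below, run through Python's sqlite3, starts from a "blinker", a vertical bar of three live cells that oscillates to a horizontal bar.

```python
# One Game-of-Life generation in pure SQL: the CELL table holds the live
# cells; a query counts live neighbours of every position and keeps cells
# with 3 neighbours (birth) or 2 neighbours if already alive (survival).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE CELL (x INTEGER, y INTEGER, PRIMARY KEY (x, y))")
# Initial population: a "blinker", three live cells in a vertical bar.
db.executemany("INSERT INTO CELL VALUES (?, ?)", [(1, 0), (1, 1), (1, 2)])

NEXT_STATE = """
WITH neighbour(x, y, n) AS (
    -- each live cell contributes 1 to each of its 8 neighbouring positions
    SELECT c.x + dx.d, c.y + dy.d, COUNT(*)
    FROM CELL c,
         (SELECT -1 AS d UNION SELECT 0 UNION SELECT 1) dx,
         (SELECT -1 AS d UNION SELECT 0 UNION SELECT 1) dy
    WHERE NOT (dx.d = 0 AND dy.d = 0)
    GROUP BY c.x + dx.d, c.y + dy.d
)
SELECT n.x, n.y FROM neighbour n
LEFT JOIN CELL c ON c.x = n.x AND c.y = n.y
WHERE n.n = 3 OR (n.n = 2 AND c.x IS NOT NULL)
"""
new_cells = db.execute(NEXT_STATE).fetchall()
db.execute("DELETE FROM CELL")
db.executemany("INSERT INTO CELL VALUES (?, ?)", new_cells)

# The blinker oscillates: the vertical bar becomes a horizontal one.
print(sorted(new_cells))  # → [(0, 1), (1, 1), (2, 1)]
```

::Indexing on (x, y) and keeping the table in memory are among the optimizations the case study explores to reach its reported speed-ups.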
:*<font color="black"><b>Case 28. From data bulk loading to database book writing</b>, draft version, <i>January 25, 2018.</i></font> [https://staff.info.unamur.be/dbm/Documents/Tutorials/SQLfast/SQLfast-Case28-Topo-sort.pdf [full text]]
::''<b>Objective</b>'': When data have to be loaded into a database from an external source, the order in which tables are filled is important as far as referential integrity is concerned. This order is determined by the directed graph formed by tables and foreign keys. From this graph, one has to derive a linear ordering that represents one of the valid orders in which table data can be loaded. This derivation is called topological sorting, for which this chapter discusses and implements a simple algorithm. However, things are a bit more complex when the graph is not acyclic, as is often the case for database loading. Therefore, the chapter studies ways to transform a graph that includes circuits into a purely acyclic graph. These techniques are also applied to the ordering of topics when planning the writing of a book.
::''<b>Keywords</b>'': data loading, database schema, (non) acyclic graph, topological sorting, strongly connected components, graph contraction, condensation of a graph, transaction management.
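::For an acyclic schema, a valid loading order can be obtained with Kahn's topological-sort algorithm. Below is a sketch in Python (the schema and table names are invented for the illustration, not taken from the case study): an edge (a, b) means table a has a foreign key to table b, so b must be filled before a.

```python
# Kahn's algorithm: repeatedly load a table whose referenced tables have
# all been loaded already; a leftover means the schema contains a circuit.
from collections import defaultdict, deque

def load_order(tables, fkeys):
    """Return a loading order for `tables`, where (a, b) in `fkeys` means
    table a references table b (so b must be filled first)."""
    refs = defaultdict(set)     # table -> tables it still references
    refd_by = defaultdict(set)  # table -> tables referencing it
    for a, b in fkeys:
        refs[a].add(b)
        refd_by[b].add(a)
    # Tables that reference nothing can be loaded immediately.
    ready = deque(t for t in tables if not refs[t])
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for a in refd_by[t]:        # t is loaded: release its referencers
            refs[a].discard(t)
            if not refs[a]:
                ready.append(a)
    if len(order) != len(tables):
        raise ValueError("schema contains a circuit; no strict order exists")
    return order

# Hypothetical schema: ORDERS references both CUSTOMER and PRODUCT.
tables = ["ORDERS", "CUSTOMER", "PRODUCT"]
print(load_order(tables, [("ORDERS", "CUSTOMER"), ("ORDERS", "PRODUCT")]))
# → ['CUSTOMER', 'PRODUCT', 'ORDERS']
```

::When the exception is raised, the graph-condensation techniques of the chapter come into play: circuits are contracted into single nodes so that the condensed graph can be sorted.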
::''<b>Objective</b>'': This chapter tackles a widespread optimization problem: computing the shortest path between two cities. The solving technique is based on Dijkstra's algorithm. It is also applied to two similar application domains, namely maze solving and controlling a rover on a hostile planet. A general-purpose, application-independent solving tool is developed.
::''<b>Keywords</b>'': optimization, shortest path, Dijkstra's algorithm, maze solving, rover control.
:*<font color="black"><b>Case 34. Blockchains</b>, draft version, <i>February 1, 2019.</i></font> [https://staff.info.unamur.be/dbm/Documents/Tutorials/SQLfast/SQLfast-Case34-Blockchains.pdf [full text]]
::''<b>Objective</b>'': In this study, we examine some fundamental aspects of blockchains, particularly the security of data and the way(s) it is achieved through cryptographic transformations. Basically, a blockchain is a historical database in which the descriptions of operations, generally called transactions, are stored in chronological order. Once recorded, the data of a transaction can never be deleted or modified.
::The document first introduces the elements of cryptography necessary to build a blockchain, notably secure hashing and symmetric and asymmetric key encryption. Then, it describes the distinctive aspects of blockchains independently of their application domain and applies them to cryptocurrencies. Finally, an experimental toolbox, comprising a collection of functions designed to manage and explore blockchains, is built step by step.
::''<b>Keywords</b>'': blockchain, blockchain explorer, proof of work, distributed database, cryptocurrency, trust, security, cryptography, RSA, AES, secure hashing.
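::The chaining mechanism can be illustrated with secure hashing alone. The toy sketch below (Python's hashlib; not the toolbox built in the case study) shows why recorded transactions can never be modified: each block stores the hash of its predecessor, so tampering with any transaction invalidates every later block.

```python
# A toy blockchain: each block carries its transactions, the hash of the
# previous block, and its own SHA-256 hash computed over both.
import hashlib, json

def make_block(transactions, prev_hash):
    body = json.dumps({"tx": transactions, "prev": prev_hash}, sort_keys=True)
    return {"tx": transactions, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def valid(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        body = json.dumps({"tx": block["tx"], "prev": block["prev"]},
                          sort_keys=True)
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Genesis block points at a conventional all-zero hash.
chain = [make_block(["Alice pays Bob 5"], "0" * 64)]
chain.append(make_block(["Bob pays Carol 2"], chain[-1]["hash"]))
print(valid(chain))                      # the untouched chain verifies
chain[0]["tx"] = ["Alice pays Bob 500"]  # tamper with recorded history...
print(valid(chain))                      # ...and validation now fails
```

::The case study builds on this principle and adds the encryption and proof-of-work machinery listed in the keywords.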
<br>
<!-- ------------------------------------------------------------------------------ -->