Data Management and Mokamel Nutrition

Data management facilities have several characteristics that we believe must be understood if they are to be used efficiently. In this research we highlight five of them: data interoperability, data integration, data exchange, data sharing, and data archiving.

The terms “data” and “interoperability” are understood differently in different communities and are difficult to pin down precisely. Broadly, however, data interoperability refers to the ability of two or more software or hardware systems to send and receive information in a mutually understandable way.

In addition, according to Asuncion and Van Sinderen (2010), interoperability faces new challenges in networked environments, particularly because of the heterogeneity and non-interoperability of data sources.

The situation becomes complicated when information comes from non-standard data sources such as folders, spreadsheets, or data retrieved over the Internet. Parent and Spaccapietra (2000) argue that interoperability is exactly the instrument required to overcome these problems, since it allows heterogeneous systems to communicate in a meaningful way.

Interoperability can be accomplished at three levels: technological, syntactic, and organizational (Hatzivasilis et al., 2018). At the lowest, technological level, the underlying communication protocols must operate seamlessly across heterogeneous devices. At the syntactic level, the data exchanged between different software and hardware must follow agreed encodings and interfaces. At the organizational level, interoperability is established between organizations that wish to exchange information despite having different internal structures.
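As a minimal sketch of the syntactic level (our own illustration; the field names and JSON encoding are assumptions, not taken from the cited sources), two independently written systems can exchange records once they agree on an encoding:

```python
import json

# System A serialises a sensor reading using the agreed JSON encoding.
def encode_reading(sensor_id: str, value: float, unit: str) -> str:
    return json.dumps({"sensor_id": sensor_id, "value": value, "unit": unit})

# System B, written independently, can still parse the record because
# both sides share the same syntactic conventions (field names, types).
def decode_reading(payload: str) -> dict:
    record = json.loads(payload)
    assert {"sensor_id", "value", "unit"} <= record.keys()
    return record

message = encode_reading("temp-01", 21.5, "C")
print(decode_reading(message))  # {'sensor_id': 'temp-01', 'value': 21.5, 'unit': 'C'}
```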

At this level, information is shared between one company and various other organisations (Moon et al., 2008). Organizational interoperability has proven challenging for many businesses because it requires managing conflicts over control of the data sources, and it also raises privacy issues.

Data integration is a data management framework in which data from different source platforms can be cleaned and fused in a unified fashion (Chen et al., 2014). In this way, many databases, particularly business databases, can be populated with new data to support new applications by consolidating data from existing internal and external sources.

One important application of integration is connecting online social media networks so that they work well with one another, allowing an organization to reach the true extent of its audience (Chen et al., 2014). A data integration system is a system that collects and retains data from various sources, combines it to produce new data, performs data analysis, and runs validation checks on the data.
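As a minimal sketch of this clean-then-fuse pattern (our own illustration; the two tables and their columns are hypothetical), consider consolidating an internal customer list with external web analytics:

```python
import pandas as pd

# Two hypothetical sources with differing conventions.
crm = pd.DataFrame({"email": ["a@x.com", "B@X.COM"], "name": ["Ann", "Bob"]})
web = pd.DataFrame({"Email": ["b@x.com", "c@x.com"], "visits": [3, 7]})

# Clean: normalise column names and key values before fusing.
web = web.rename(columns={"Email": "email"})
for df in (crm, web):
    df["email"] = df["email"].str.lower()

# Fuse: consolidate both sources into one integrated view, then run a
# simple validation check (no duplicate keys) on the result.
integrated = crm.merge(web, on="email", how="outer")
assert integrated["email"].is_unique
print(integrated)
```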

In addition, there are several principles to consider when applying data integration. It can be carried out using one of two general methods, data federation and data propagation, each with its own goal and each being the optimal solution under different conditions. Data federation creates a virtual view of the unified data without storing all the data in a centralised, physical database (Hoffer et al., 2016). Data propagation, in the meantime, duplicates data in real time to build a new view (Hoffer et al., 2016). This gives it an advantage over data federation, as it allows near-real-time cascading of data changes throughout the organization.
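The contrast between the two methods can be sketched as follows (a hypothetical illustration of ours, not code from Hoffer et al.): a federated view consults the live sources at query time, while a propagated store maintains its own near-real-time replica:

```python
class FederatedView:
    """Federation: answer queries by consulting the live sources on
    demand; no unified copy of the data is ever stored."""
    def __init__(self, *sources):
        self.sources = sources

    def lookup(self, key):
        for source in self.sources:
            if key in source:
                return source[key]
        return None


class PropagatedStore:
    """Propagation: every change in a source is pushed, in near real
    time, into a duplicated store that queries then read from."""
    def __init__(self):
        self.replica = {}

    def on_source_change(self, key, value):  # called for each update
        self.replica[key] = value

    def lookup(self, key):
        return self.replica.get(key)


crm, billing = {"ann": "gold"}, {"bob": "silver"}
view = FederatedView(crm, billing)
print(view.lookup("bob"))  # 'silver': the read goes to a live source

store = PropagatedStore()
store.on_source_change("ann", "gold")
print(store.lookup("ann"))  # 'gold': the read goes to the replica
```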

Data exchange is required in many tasks where data must be transferred between existing, independently created applications (Fagin et al., 2005). In database management systems (DBMSs), data defined under one schema (known as the source schema) has to be translated and restructured into an instance of a different schema (the target schema) (Fagin et al., 2005). The problem of how to merge databases that have dissimilar schemas is a major challenge that needs to be tackled (Arenas et al., 2014).
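A minimal sketch of this source-to-target restructuring (our own example; the schemas and field names are invented for illustration):

```python
# Source schema: rows of (full_name, city).
source_rows = [
    {"full_name": "Ann Lee", "city": "Leeds"},
    {"full_name": "Bob Ray", "city": "York"},
]

def to_target(row: dict) -> dict:
    """Translate one source tuple into an instance of the target
    schema (first, last, location) by restructuring its fields."""
    first, last = row["full_name"].split(" ", 1)
    return {"first": first, "last": last, "location": row["city"]}

target_rows = [to_target(r) for r in source_rows]
print(target_rows[0])  # {'first': 'Ann', 'last': 'Lee', 'location': 'Leeds'}
```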

The requirement for systems supporting data exchange has persisted throughout the years (Fagin et al., 2005). However, as web data has come to be collected in a variety of layouts (e.g., semi-structured schemas such as XML/DTD schemas, conventional relational database schemas, and several scientific formats) and as the terrain for data exchange has grown, this need has become more pronounced (Fagin et al., 2005).

Overall, data exchange also underpins the ways of working among personnel within an organisation. For example, employees regularly need to exchange data, and when they send data to their customers or colleagues it is important that the different systems involved can read it, so that specific operations can be completed collaboratively.

There may be some confusion between the three terms data exchange, data interoperability, and data integration. The purpose of data integration is to synthesise data from multiple, normally independent sources into a single “view” according to a “global” schema (Kolaitis, 2005). In data exchange, by contrast, the aim is to transfer data between two separately designed schemas so that the materialised target data approximates the source data as faithfully as possible.

Organizations share data through a broad range of data-collecting approaches, which might seem an obvious opportunity for data sharing to grow. Data sharing is the sharing of the same data resources with different applications or users (Wallis et al., 2013). It can therefore benefit the data providers as well as the recipients. However, there are various possible concerns with sharing data in either direction. According to Neylon (2017) and Sayogo and Pardo (2013), data-sharing issues centre chiefly on ethical concerns and control issues. Of the numerous challenges and barriers that impede data-sharing efforts, we highlight one in this study: regulation and legal constraints.

On the regulatory and legal side, researchers can invoke the rigidity of rules and regulations as a legitimate reason for avoiding knowledge sharing on the grounds of privacy or compliance. In order to enhance data exchange while maintaining proper control of information, it is important, according to Sayogo and Pardo (2013), to take account of the lessons learned from cross-agency information sharing and from legal regulations and strategies, especially where existing policies do not guarantee neutral contact. Thus, data processing capabilities are required to facilitate data sharing, and regulations are essential to secure it (Neylon, 2017).

Every database that stores history (e.g., a history of transactions) will ultimately contain outdated data, that is, data no longer of any use (Hoffer et al., 2016). Database statistics, for example, can reveal the frequency with which pages or records are accessed; a low access frequency can indicate data that no longer serves a purpose (Hoffer et al., 2016).

Furthermore, depending on the retention rules that exist within the organisation, older data (e.g., data more than six years old) may not need to be maintained for active processing (Hoffer et al., 2016). However, this does not necessarily mean the data should simply be discarded, since it may still be needed for occasional business intelligence queries, for legal reasons, and so on (Hoffer et al., 2016). Database administrators therefore need to establish a programme for archiving unused data (Hoffer et al., 2016).
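A minimal sketch of such an archiving step (our own illustration; the table and column names are hypothetical), moving rows past a retention cutoff from the active table into an archive table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE transactions (id INTEGER, amount REAL, tx_date TEXT);
    CREATE TABLE transactions_archive (id INTEGER, amount REAL, tx_date TEXT);
    INSERT INTO transactions VALUES (1, 9.5, '2015-01-10'),
                                    (2, 4.0, '2024-06-01');
""")

CUTOFF = "2018-01-01"  # e.g., a six-year retention rule
with conn:
    # Copy the old rows into the archive, then remove them from the
    # active table so it stays small and fast.
    conn.execute("INSERT INTO transactions_archive "
                 "SELECT * FROM transactions WHERE tx_date < ?", (CUTOFF,))
    conn.execute("DELETE FROM transactions WHERE tx_date < ?", (CUTOFF,))

print(conn.execute("SELECT COUNT(*) FROM transactions").fetchone())          # (1,)
print(conn.execute("SELECT COUNT(*) FROM transactions_archive").fetchone())  # (1,)
```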

Data can be archived to files stored outside the database (e.g., on optical storage or magnetic tape) or to separate database tables, thereby making the active tables more efficient. Archive files can additionally be compressed to save space (Hoffer et al., 2016).

Moreover, techniques should be established to restore archived data to the database in adequate time, if and when it is required, since archived data is not completely obsolete but merely inactive (Hoffer et al., 2016). Archiving thus reduces disk storage costs, reclaims disk space, and may enhance the functioning of the database by allowing inactive data to be moved onto cheaper storage (Hoffer et al., 2016).
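The sketch below (our own, hypothetical illustration) shows both halves of that lifecycle: writing inactive rows to a compressed archive file outside the database, and restoring them later when required:

```python
import gzip
import json

# Hypothetical inactive rows identified for archiving.
inactive_rows = [{"id": 1, "amount": 9.5, "tx_date": "2015-01-10"}]

def archive(rows, path):
    """Compress inactive rows into an external archive file."""
    with gzip.open(path, "wt", encoding="utf-8") as f:
        json.dump(rows, f)

def restore(path):
    """Bring archived rows back when a query or audit requires them."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return json.load(f)

archive(inactive_rows, "tx_2015.json.gz")
print(restore("tx_2015.json.gz"))  # [{'id': 1, 'amount': 9.5, ...}]
```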
