There are several tools available, notably through ODBC vendors (I work for one: Attunity).

1 >> Tools to retrieve schema of indexed files

Please clarify: are you looking for just the record/column layout and indexes within the files, or also for relationships between files?

1a) How are the files currently being used? COBOL, BASIC, Fortran programs? Datatrieve? They will be using some data-definition method, so you want a tool which can exploit that. ConnX and Attunity Connect can 'import' CDD definitions, BASIC MAP files, and COBOL copybooks. Variants are typically covered as well. I have written many a (Perl/awk) script to convert a special definition to XML.

1b) ANALYZE/RMS_FILE, or a program calling RMS with XABs, can get the available index information. Attunity Connect will know how to map those indexes onto the fields from 1a).

1c) There is no formal, stored relationship between (indexed) files on OpenVMS. That's all in the program logic. However, a modestly smart Perl/awk/DCL script can often generate a table of likely foreign/primary keys by looking for matches in field names and datatypes.

How many files / layouts / gigabytes are we talking about?

2 >> Tools to parse indexed files

Please clarify. Once the structure is known (question 1), the parsing is done by reading using that structure, right? You never, ever want to understand the indexed-file internals. Just tell RMS to fetch records.

3 >> Tools to deal with custom RMS data formats (zoned decimals etc) as a bundle/API/Library

Again, please clarify. Once the structure is known, just use the 'right' tool to read using that structure, and surely it will honor the detailed data definitions.

> (I know it is quite simple to write one yourself, just thought there would be something in the industry)

Famous last words... 'quite simple'. Entire companies have been built, and thrive, doing just that for the general case. I admit that for specific cases it can be relatively straightforward, but 'the devil is in the details'.

In the Attunity Connect case we have a UDT (User Defined data Type) to handle the 'odd' cases, often involving DATES. Dates as integers, as strings, and as units-since-xxx are all available out of the box, but, for example, some use -1 to mean 'some high date', which needs some help before it can be stored in a DB.

All the databases have some bulk-load tool (BCP, SQL*Loader). As long as you can deliver data conforming to what those expect (tabular, comma-separated, quoted-or-not, escapes-or-not) you should be in good shape.

The EGH tool Vselect may be a handy, and high-performance, way to bulk-read indexed files, filter and format them, and spit out sequential files for the DB loaders. It can read an RMS indexed file faster than RMS can! (It has its own metadata language, though.)

Attunity offers full access and replication services. They include CDC (change data capture), to not only load the data but also keep it up to date in near-real-time. That's useful for 'evolution' versus 'revolution'. Check out Attunity 'Replicate': once you have a data dictionary, just point to the tables desired (include/exclude filters), point to a target DB, and click to replicate. Of course there are options for (global or per-table) transformations (like combining AREA-CODE + EXCHANGE + NUMBER into a single phone number, or adding a modified-date column).

Will this be a single big-switch conversion, or is there a desire to migrate the data and keep the old systems alive for days, months, perhaps years, all along keeping the data in close sync?

Hope this helps some,
Hein van den Heuvel
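To illustrate the kind of definition-to-XML conversion script mentioned in 1a), here is a toy sketch (in Python rather than Perl/awk). The copybook fragment and XML shape are invented examples; real copybooks need far more care (level numbers, REDEFINES, OCCURS, signed/scaled PICs).

```python
# Toy sketch: turn a trivial COBOL copybook fragment into XML field
# definitions. Purely illustrative; real copybooks are much richer.
import re
import xml.etree.ElementTree as ET

copybook = """\
03 CUST-ID    PIC 9(8).
03 CUST-NAME  PIC X(40).
"""

root = ET.Element("record", name="CUSTOMER")
for line in copybook.splitlines():
    # match: level-number, field name, PIC clause, trailing period
    m = re.match(r"\s*\d+\s+(\S+)\s+PIC\s+(\S+)\.", line)
    if m:
        ET.SubElement(root, "field", name=m.group(1), pic=m.group(2))

print(ET.tostring(root, encoding="unicode"))
```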
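The "modestly smart" key-guessing script from 1c) can be sketched roughly like this (Python here; the file names and layouts are invented, and a real script would read the layouts from CDD exports, copybooks, or FDL output):

```python
# Sketch: guess likely primary/foreign key pairs by finding fields that
# share a name and datatype across different files. Layouts are invented.
from collections import defaultdict

# field definitions per file: {file: [(field_name, datatype), ...]}
layouts = {
    "CUSTOMER.IDX": [("CUST_ID", "DECIMAL(8)"), ("CUST_NAME", "CHAR(40)")],
    "ORDER.IDX":    [("ORDER_ID", "DECIMAL(10)"), ("CUST_ID", "DECIMAL(8)")],
}

def likely_keys(layouts):
    """Report (name, datatype) pairs that occur in two or more files."""
    seen = defaultdict(list)              # (name, datatype) -> [file, ...]
    for fname, fields in layouts.items():
        for field, dtype in fields:
            seen[(field, dtype)].append(fname)
    return {key: files for key, files in seen.items() if len(files) > 1}

print(likely_keys(layouts))
# {('CUST_ID', 'DECIMAL(8)'): ['CUSTOMER.IDX', 'ORDER.IDX']}
```

A real version would also want fuzzier name matching (CUST-ID vs CUSTOMER-ID) and some human review of the resulting table.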
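As a taste of the "devil in the details" around zoned decimals from question 3, here is a minimal decoder sketch. It assumes the common ASCII overpunch convention ('{' = +0, 'A'-'I' = +1..+9, '}' = -0, 'J'-'R' = -1..-9, plain trailing digit = positive); the actual sign convention varies per compiler and character set, so verify against your own data.

```python
# Minimal zoned-decimal decoder, assuming ASCII overpunch signs in the
# last byte. Sign conventions differ per compiler; treat this as a sketch.
POSITIVE = {"{": 0, **{chr(ord("A") + i): i + 1 for i in range(9)}}
NEGATIVE = {"}": 0, **{chr(ord("J") + i): i + 1 for i in range(9)}}

def decode_zoned(text: str, scale: int = 0):
    """Decode a zoned-decimal string; scale = implied decimal places."""
    body, last = text[:-1], text[-1]
    if last in POSITIVE:
        value = int(body + str(POSITIVE[last]))
    elif last in NEGATIVE:
        value = -int(body + str(NEGATIVE[last]))
    else:                                  # plain trailing digit: positive
        value = int(text)
    return value / (10 ** scale) if scale else value

print(decode_zoned("1234E"))      # 12345   ('E' overpunch = +5)
print(decode_zoned("1234N", 2))   # -123.45 ('N' overpunch = -5, 2 decimals)
```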