Table Advice

I have a requirement to create an employee job tracking system and want some advice on the tables. Here is the flow of what happens:
An employee is assigned about 10 jobs to start with. For each job an employee is assigned, I need to track their status for each day.
So for each record in the employee job table, there would be 365 records tied to it in the Employee_Job_Detail table.
If an employee switches jobs mid-year, which seems to happen frequently, I need to update the Employee_Job_Detail table (job_status column) for all records from that date to the end of the year. I would also need to create another 365 records in that table for the new job they are assigned. So for each transaction I may be updating a couple hundred records and inserting another 365.
I propose the tables below. Each job-detail row is small, but I am concerned about performance given the number of records and the DML that will occur. I thought about using arrays, but I am not sure whether that would help. Any advice would be appreciated.
Thanks!
Employee_Job
PK_ID
EMPLOYEE_ID
EMPLOYEE_JOB
unique index (employee_id, employee_job)
Employee_Job_Detail
PK_ID
FK_EMPLOYEE_JOB foreign key references Employee_Job (PK_ID)
JOB_DATE
JOB_STATUS
unique index (fk_employee_job, job_date)
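In DDL terms, what I have in mind is roughly the following (column types are placeholders, not settled yet):

    -- Sketch of the proposed two-table design; types are assumptions.
    CREATE TABLE employee_job (
        pk_id         NUMBER       PRIMARY KEY,
        employee_id   NUMBER       NOT NULL,
        employee_job  VARCHAR2(30) NOT NULL,
        CONSTRAINT employee_job_uq UNIQUE (employee_id, employee_job)
    );

    CREATE TABLE employee_job_detail (
        pk_id            NUMBER       PRIMARY KEY,
        fk_employee_job  NUMBER       NOT NULL REFERENCES employee_job (pk_id),
        job_date         DATE         NOT NULL,
        job_status       VARCHAR2(10) NOT NULL,
        CONSTRAINT employee_job_detail_uq UNIQUE (fk_employee_job, job_date)
    );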

bobmagan wrote:
[quotes the original post above]
Is there more to it than you are showing? It seems not terribly useful, with the only real information from the system being job status. Since proper table design depends on knowing all of the data elements and their relationships, it's nigh unto impossible with what little you've shown, and without knowing how the data is really used.
First thing I think of is: why 365 daily rows in the detail table? Why not just a row when the status changes (see the sketch below)?
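Something like this, as a rough sketch (table and column names are illustrative): store one row per status change, and derive the status for any given day.

    -- One row per status change instead of one row per day.
    CREATE TABLE employee_job_status (
        fk_employee_job  NUMBER       NOT NULL REFERENCES employee_job (pk_id),
        status_date      DATE         NOT NULL,  -- date the status took effect
        job_status       VARCHAR2(10) NOT NULL,
        CONSTRAINT employee_job_status_uq UNIQUE (fk_employee_job, status_date)
    );

    -- The status in effect on day :day is the latest change on or before it.
    SELECT job_status
    FROM   employee_job_status
    WHERE  fk_employee_job = :job_id
    AND    status_date = (SELECT MAX(status_date)
                          FROM   employee_job_status
                          WHERE  fk_employee_job = :job_id
                          AND    status_date <= :day);

A mid-year job switch then becomes one or two inserted rows rather than hundreds of updates plus 365 inserts.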

Similar Messages

  • How to get Payment Advice details using REGUH & REGUP tables

    Hi,
    How do I fetch payment advice details from the REGUH & REGUP tables by passing the header details of a payment document (BEKNR, BUKRS and GJAHR)?
    My observation:
    E.g. for payment document 1500000135 I am getting the following records from the REGUH table:
    MANDT  LAUFD       LAUFI   ZBUKR  LIFNR   VBLNR
    300    15.04.2010  00009O  1000   AB6666  1500000135
    300    16.04.2010  00002O  1000   AB6666  1500000135
    300    17.04.2010  00001O  1000   AB6666  1500000135
    While debugging the driver program for check printing, I observed that REGUH contains the entry:
    300    17.04.2010  00001*  1000   AB6666  1500000135
    For other documents I have also found that it picks up the last entry from REGUH (I have related 00001O with 00001*) and displays records on the check accordingly.
    Correct me if my observation is wrong.
    What is the meaning of LAUFI 'Additional Identification'?
    When displaying a check, does the program always pick the last/latest payment run entry from REGUH?
    thanks,
    Edited by: Amol Deshpande on Apr 19, 2010 9:41 AM

    Hi Suhas,
    But if we make payments through the F-58 and FBZ5 transactions, both tables REGUH and REGUP also get updated. In that case, how does the system determine the payment run ID?
    The payment run ID (REGUH-LAUFI) is crucial for me to fetch payment details from the REGUP table.
    Also, when does the system generate multiple lines in REGUH for a single payment document?
    Please share your views
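    For illustration, the "latest payment run wins" behaviour described above corresponds to a lookup of roughly this shape, written as generic SQL purely for illustration (the real driver program is ABAP, and only fields quoted in this thread are used):

        -- Payment run entries for one payment document, newest first.
        SELECT laufd, laufi, zbukr, lifnr, vblnr
        FROM   reguh
        WHERE  vblnr = '1500000135'
        AND    zbukr = '1000'
        ORDER  BY laufd DESC, laufi DESC;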

  • Payment Advice Table

    Hi Guys,
    What are the payment advice tables? Please help.

    Hi,
    Try
    AVIK       Payment Advice Header
    AVIP       Payment Advice Line Item
    AVIR       Payment Advice Subitem
    Thanks
    Aravind
    ASSIGN points if helpful

  • SQL advice for updating a table.

    Hello Wonderful guru's,
    (I am running Oracle 10g 10.1.0.2.0 on Linux AS3.) I am stuck trying to delete a user from a user table when the user has not logged into "a software application" for 90 days. The table is USER (userid VARCHAR2(15), name VARCHAR2(50), password VARCHAR2(30), update_date DATE, password_expire_date DATE, blocked VARCHAR2(1)).
    The other table is LOCATION_HISTORY (location_id NUMBER(3), userid VARCHAR2(15), login_date DATE).
    I have a trigger that populates the LOCATION_HISTORY table every time someone logs in to the software, recording who, when and where for each login; there are multiple login dates per ID.
    I need to check whether the max(login_date) is 90 or more days before sysdate, and if so delete the userid from the USER table, because the user has not logged in for 90 days and therefore the account is not "active" enough to keep.
    I have an idea how to do it as follows:
    delete from user where (select userid, max(login_date) from location_history where max(login_date) = sysdate-90;
    But I am wondering if I am getting the logic right. I was going to run this SQL as a cron job every night after midnight.
    Can you offer me advice to whether I am approaching this from the right angle?
    Thank you in advance for any advice- I do not have anyone to bounce stuff like this off of.

    user542952 wrote:
    delete from user where (select userid, max(login_date) from location_history where max(login_date) = sysdate-90;
    If that is your logic, you can create a job in the Oracle database itself which will run every day or at any frequency you wish.
    The package you can use is DBMS_JOB or DBMS_SCHEDULER
    DBMS_SCHEDULER is preferred and it is available in the version you have mentioned.
    [ Documentation | http://download.oracle.com/docs/cd/B14117_01/appdev.101/b10802/d_sched.htm#1012994]
    SS
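    As a side note, the DELETE as posted will not parse. A corrected version, plus a DBMS_SCHEDULER job to run it nightly, might look roughly like this (a sketch only; the table name USERS and the exact retention rule are assumptions -- USER is a reserved word in Oracle, so the real table presumably has another name):

        -- Delete users whose most recent login is 90 or more days old.
        -- Users with no LOCATION_HISTORY rows at all are not touched here.
        DELETE FROM users
        WHERE  userid IN (SELECT userid
                          FROM   location_history
                          GROUP  BY userid
                          HAVING MAX(login_date) <= SYSDATE - 90);

        -- Schedule it nightly, shortly after midnight.
        BEGIN
          DBMS_SCHEDULER.CREATE_JOB(
            job_name        => 'PURGE_STALE_USERS',
            job_type        => 'PLSQL_BLOCK',
            job_action      => 'BEGIN
                                  DELETE FROM users
                                  WHERE userid IN (SELECT userid
                                                   FROM location_history
                                                   GROUP BY userid
                                                   HAVING MAX(login_date) <= SYSDATE - 90);
                                  COMMIT;
                                END;',
            start_date      => SYSTIMESTAMP,
            repeat_interval => 'FREQ=DAILY; BYHOUR=0; BYMINUTE=10',
            enabled         => TRUE);
        END;
        /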

  • Need advice on preventing duplicate entries in People table

    Hi,
    In my database, I have a "People" table where I store basic information about people e.g. PersonId, FirstName, LastName, Gender, etc.
    There will be lots of entries made into this table and I want to prevent duplicate entries as much as humanly possible. I'd appreciate some pointers on what I should do to minimize duplicates.
    My primary concerns are:
    Duplicate entries for the same person using the person's full name vs. given name e.g. Mike Smith and Michael Smith
    Making sure that two separate individuals with identical names do get entered into the table and get their own unique PersonIds.
    I am not even sure how I can possibly know whether two individuals with identical names are two different people without additional information, but I wanted to ask the question anyway.
    Thanks, Sam

    Thank you all very much for your responses.
    There are three separate issues/points here.
    It is clear that it is impossible to prevent duplicates using only a person's first, middle and last names. Once I rely on an additional piece of information, things get "easier", though nothing is bulletproof. I felt that this was self-evident but wanted to ask the question anyway.
    The second issue is "potential" duplicates where there are some variations in the name, e.g. Mike vs. Michael. I'd like a bit more advice on this. I assume I need to create a table to define variations of a name to catch potential duplicates.
    The third point is what Celko brought up -- rather nicely too :-) I understand both his and Erland's points on this, as typical relational DB designs usually create people/user tables based upon their context, e.g. Employees, Customers, etc.
    I fundamentally disagree with this approach -- though it is currently the norm in most commercial DB designs. The reason is that it actually creates duplicates, and my point is to prevent them. I'm going for more of an object-based approach in the DB design, where a person is a person regardless of the different roles he/she may play, and I see no reason to repeat information about the person, e.g. first name, last name, gender, etc., in both the customer and employee tables.
    I strongly believe that all the information directly related to a person should be kept in the People table and referenced in different business contexts as necessary.
    For example, I assign every person a PersonId in the People table. I then use the PersonId as part of the primary key in the Customers or Employees tables as well. Obviously, PersonId is also a foreign key in the Customers and Employees tables. This removes the need for a separate CustomerId and allows me to centralize all the personal data in the People table.
    In my opinion this has three advantages:
    Prevent duplication of data.
    Allow global edits, e.g. if the last name of a female employee changes, it is automatically updated for her within the context of the "Customer" role she may play in the application.
    Last but not least, data enrichment, where a person may enter additional data about himself/herself in different contexts. For example, in the employee context we may have the person's spouse information through "Emergency Contacts", which may come in handy within the context of customer for this person.
    Having everyone in the People table gives me these three advantages.
    Thanks, Sam
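    A minimal sketch of the pattern Sam describes (all names and types here are illustrative): the People table owns the personal data, and each role table borrows the same key.

        CREATE TABLE People (
            PersonId   INT         PRIMARY KEY,
            FirstName  VARCHAR(50) NOT NULL,
            LastName   VARCHAR(50) NOT NULL,
            Gender     CHAR(1)
        );

        -- Role tables reference the person instead of duplicating the data.
        CREATE TABLE Employees (
            PersonId  INT  PRIMARY KEY REFERENCES People (PersonId),
            HireDate  DATE NOT NULL
        );

        CREATE TABLE Customers (
            PersonId      INT  PRIMARY KEY REFERENCES People (PersonId),
            AccountOpened DATE NOT NULL
        );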

  • Need View/Table Linking Advice

    Post Author: scotthibbard
    CA Forum: Data Connectivity and SQL
    If someone could provide some advice on this issue I'd appreciate it...
    I am using CR 11.5 Dev Edition. I used the Database Expert to add a view to an existing report. I get two error messages saying the following:
    1) "Your current link configuration contains multiple starting points. Please be advised that this is generally not supported."
    2) "Database Connector Error: 'If tables are already linked then the join type cannot change.'"
    At present the view is unlinked (I presume this is the cause of error #1); however, I get error #2 and a blank report returned whether I link the view or not.
    Thanks,
    Scott

    Post Author: JimW
    CA Forum: Data Connectivity and SQL
    The errors definitely seem to indicate a linking problem although you'd think the SQL wouldn't work outside Crystal if that were the problem.  The nested views shouldn't cause a problem as long as the view itself is returning what you expect.
    I assume your Join Types are ok.  Do you have any SQL Expressions?  Do you have any subreports?
    I think I'd have to clear links and relink again just to be sure (and because it's so fun).  One thing to check is to scroll all the way in the link diagram to make sure there isn't a rogue table sitting out there that got added by mistake, although that's a shot in the dark.
    ...Jim...

  • Please advise about table partitioning

    Hi.
    I need some advice on table partitioning. Here is the problem: my application processes 7 types of documents (all like invoices). I have created master/detail tables for each type of document, but there are no big differences between the documents, so I was planning to move all of these document types into one master table and one detail table and use table partitioning based on document type. This would simplify my programming very much... but I am not sure how it will affect application performance. We process a large number of documents (over 300 a day).
    So what do you think ?

    1) If you are using the standard edition, partitioning is not an option. Partitioning is an extra-cost licensing option that is only available to enterprise edition customers.
    2) I am having a bit of difficulty reconciling the numbers you are providing with your descriptions. An application that inserts 5 MB of data a day and adds fewer than 2 million rows per year is a pretty small database, particularly to consider partitioning for. 50 active users doesn't seem terribly large. It is far from obvious to me that separate schemas by year provides any sort of benefit overall. OLTP operations should be able to use appropriate indexes to avoid all the read-only data. And a single schema with a single set of tables minimizes the number of hard parses you have to do, the number of queries that have to be rewritten every year to go after new schemas of data, etc
    Justin
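    For reference, the single master table the poster describes would typically use list partitioning on the document type (an Enterprise Edition option, per point 1 above); a sketch, with all names and types assumed:

        CREATE TABLE document_master (
            doc_id    NUMBER       PRIMARY KEY,
            doc_type  VARCHAR2(10) NOT NULL,
            doc_date  DATE         NOT NULL
        )
        PARTITION BY LIST (doc_type) (
            PARTITION p_invoice VALUES ('INVOICE'),
            PARTITION p_credit  VALUES ('CREDIT'),
            PARTITION p_other   VALUES (DEFAULT)
        );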

  • Transferring data between two tables - Best practice advice

    Hi!
    I need advice on best practice since I am new to ABAP thinking.
    I have two tables. I am going to transfer data from table1 and update the corresponding master data table with the data in table1.
    Which is the best way of doing this? The maximum amount of data to be transferred is 300,000 rows in table1.
    I can think of only one way, the simple one: read all the rows into an internal table and then update all the corresponding rows in the master data table.
    Is there a better way of doing this?
    thanks in advance,
    regards
    Baran

    Hi!
    1. The update will be done a couple of times per week.
    2. Yes, the fields are the same.
    3. Both tables are SAP dictionary tables. One is a staging table and the other is a master data table. Our problem is that we want a custom field on a standard master data table. We add an extra field to the staging table and the same field to the corresponding master data table, but the standard API does not support transferring data between custom fields, so we are developing our own code to do this.
    After some standard code has transferred the standard fields from the staging table to the master data table, we will transfer our field by updating all the rows in the standard table (see the set-based sketch below).
    thanks
    regards
    Baran
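    In set-based terms, the update Baran describes amounts to a MERGE keyed on the shared key fields; the sketch below is generic SQL purely to show the shape (the actual implementation would be ABAP, and all names are illustrative):

        -- Copy the custom field from the staging table onto matching
        -- master data rows.
        MERGE INTO master_table m
        USING staging_table s
        ON (m.key_field = s.key_field)
        WHEN MATCHED THEN
          UPDATE SET m.zz_custom_field = s.zz_custom_field;

    In ABAP itself, the usual advice for 300,000 rows is to process in packages rather than row by row.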

  • Beginner needs advice on PS tables

    Dear All,
    Kindly advise on the function and names of the PS tables that are normally used in the PS module, such as PRPS, PROJ, AUFC.
    Many thanks in advance.
    Nies

    Hi,
    Please find detail list of tables in SAP-PS.
    Master Data:
    PROJ Project definition
    PRPS WBS elements
    PRTE Scheduling data
    PRHI WBS hierarchy
    AUFK Orders/networks headers
    AFKO Production Orders/networks
    AFVC Network activities
    AFVU Network activities
    AFVV Network activities
    RESB Network Components
    MLST Milestones
    Transaction data and totals:
    RPSCO Project info database (cost, revenues)
    RPSQT Project info database (quantities)
    COSP Cost totals for external postings
    COSS Cost totals for internal postings
    COSB Total variances/result analysis
    COEP Line items, actuals
    COOI Line items, commitments
    COEJ Line items, planned orders
    BPGE Budget, overall cost
    BPJA Budget, annual values
    QBEW Project stock valuation
    MSPR Project stock (incl. non-valuated)
    AUFK Order master data
    AFVU DB structure of the user fields of the operation
    AFKO Order header data PP orders
    EBAN Purchase Requisition
    EBKN Purchase Requisition Account Assignment
    JEST Individual Object Status
    LFM1 Vendor master record purchasing organization data
    EKKO Purchasing Document Header
    EKPO Purchasing Document Item
    CSKU Cost Element Texts
    CSKT Cost Center Texts
    ANLA Asset Master Record Segment
    JCDS Change Documents for System/User Statuses (Table JEST)
    COSP CO Object: Cost Totals for External Postings
    COSS CO Object: Cost Totals for Internal Postings
    COEP CO Object: Line Items (by Period)
    BKPF Accounting Document Header
    BSEG Accounting Document Segment
    CSKS Cost Center Master Data
    COBK CO Object: Document Header
    CEPC Profit Center Master Data Table
    PRPS WBS (Work Breakdown Structure) Element Master Data
    PROJ Project definition
    PRHI Work Breakdown Structure, Edges (Hierarchy Pointer)
    TJ02 System status
    BPJA Totals Record for Annual Total
    Hope this will solve your problem.
    Regards,
    Rakesh Pradhan
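    As a usage note, a common lookup joins WBS elements to their project definition via PRPS-PSPHI = PROJ-PSPNR; sketched here as generic SQL for illustration (in a report this would be ABAP Open SQL -- verify the field names in your system):

        -- Project definitions with their WBS elements.
        SELECT p.pspid,   -- external project number
               w.posid,   -- external WBS element number
               w.post1    -- WBS element description
        FROM   proj p
        JOIN   prps w ON w.psphi = p.pspnr;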

  • Advice for custom transformation script to change table FK Associations

    Hello,
    I am attempting to write a transformation script that will cycle through tables in a large model and modify some
    FK associations to change the remote table in the FK. As an aside, the reason for this is that our logical model
    has logical entities that represent many small lookup code domains. However our relational/physical model has a
    single utility code repository physical table which contains all these code domains. So part of the transformation
    from our logical model to relational model is to swing all the FK associations for these lookup code domains
    into the common code table. Then the tables representing the lookup domains can be hard deleted.
    In any case, I'm looking at the
    oracle.dbtools.crest.model.design.relational.FKIndexAssociation
    or the
    oracle.dbtools.crest.model.design.relational.Index
    to see if one of these can be manipulated to perform the transformation I want to accomplish.
    When I perform the equivalent process manually via property sheets there are several ways of accomplishing this.
    a) The neatest method seems to be to modify the existing FK in place by clicking on the FK in the relational diagram and then changing
    the property called 'PK / UK Index'. Change that to the PK of the utility code repository table that is replacing the original
    table (representing the lookup domain). A side effect of this results in the mapping of the FK
    columns being nulled out. So they need to be remapped to the original columns (which are retained).
    So in two steps the existing FK is detached from the original table, reattached to the new table and then
    the FK columns are reused in place for the new FK mapping.
    b) The brute force method is to delete the original FK and then create an entirely new FK to the new table. There are
    multiple dialogs which allow you to retain the original FK columns and reuse them in the new FK.
    I realize this is a somewhat complex transformation but I'd appreciate some algorithmic advice on which path to take
    to try to accomplish this. In particular, is there a scripting equivalent to the first process above?
    I don't even know if the FK that is represented on the relational diagram is represented by the FKIndexAssociation or Index
    class in the scripting object model. In other words, as I loop through the tables in the relational model what would be my
    starting point:
    fks = table.getFKAssociationsList();
    or
    fkIndexes = table.getAllInds_FKeyInds();
    or something else. The names of the properties in the user interface don't always match with the scripting properties so
    it's a little tricky to figure this stuff out.
    Once I get access to the object that is equivalent to the FK in the diagram/property sheet, what
    is the best way to manipulate it via scripting. How can I change the parent/remote end of the FK to point to the new table.
    Or if I have to delete the original FK and then recreate it - is there a way to save the original FK columns and reuse them
    while creating the new FK?
    One other question. What is the best way to delete tables from a relational model? I note that there is an undocumented (?) remove()
    method that is available on Table. It is probably on a superclass so other things can be removed as well.
    When I try to use this method in a loop the script throws up dialogs similar to the dialogs that appear while performing a delete
    manually via the UI. Is there any way to intercept and respond to the dialogs programmatically?
    So for instance if I get a dialog popup like
    Do you want to delete generated FK columns 'FOO' in table 'BAR'?
    is there a way to intercept that dialog in the script and then answer Yes or No to it via the script?
    If it turns out to be too difficult to perform this type of transformation via scripting the fallback
    is to perform the transformation on the generated DDL file. But it would be cleaner to be able
    to perform the transformation in the DM Relational model if possible.
    Any advice appreciated.
    Rgds, BP

    Philip,
    This is good info. I was able to get a rough version of my transformation script to work.
    However I'd like to clean it up a bit.
    In your notes it is not clear how the FKIndexAssociation and associated Index are related.
    Is there a getter on FKIndexAssociation that references the associated Index?
    It looks like the method fk.changeKeyObject() takes an Index object as its first parameter.
    What would be the cleanest way to extract the PK Index from the Code table that I want
    to point to? I used something like:
    cdTable = model.getTableSet().getByName("CD_REPOSITORY");
    cdPkIndex = cdTable.getIndexByName("CD_REPOSITORY_PK");
    I used the latter method because it's the only one that returns an object of type Index
    which is required as input to FKIndexAssociation.changeKeyObject(). However this method
    requires the name of the index. That's fine but I'd like to just grab the PK's supporting index
    directly if possible (without knowing the name).
    I'm curious if there is a getter on Table or FKIndexAssociation that can just grab the index of the PK of
    a table directly?
    I don't see such a getter on FKIndexAssociation. On the Table class there are many getters that might be
    of some use but the only one I could get to work was getIndexByName() as shown above.
    I tried to use getIndexes() and then use isPk() on the result of that in a loop but that didn't return any result.
    It looks like getters getIndexes(), getPKs(), getUKeys() return collections of DesignObject which are just the
    superclass of FKIndexAssociation which is not what I need (i.e. which IS the actual underlying Index object).
    So I guess the general question is what is the best way to get access to the Index objects in two scenarios.
    First for the PK and UKs (typically on the 'parent-referenced' table) and secondly for FKs (typically on the 'child-referencing' table).
    If I were imagining how this would work I would imagine getters on the Table for the former and a getter
    on the FKIndexAssociation for the latter. But that's not how it works.
    Also, as an aside - this 'Index' is somewhat confusing since these seem to be 'potential indexes' and not true database indexes.
    These 'indexes' seem to be more closely related to database constraints and not actual indexes. I know that Primary Keys are
    implemented in Oracle via a unique index but I'd like to confirm that the class
    oracle.dbtools.crest.model.design.relational.Index
    is not the same thing as a DDL database Index.
    Secondary question. I notice when I run the script which loops through all the tables and redirects the FKs - the
    diagram does not refresh. I had to save the model and then reload it for the graphical representation
    of the FKs to change and redirect to the new table.
    Is there a way to force the refresh to occur right away?
    I've noticed that other transformation scripts I've written HAVE resulted in the diagram being refreshed
    right away (even while the script is running). In this case it does not.
    Rgds, BP

  • Advice needed on designing schema to accommodate multiple transaction tables.

    Hi,
    The attached image shows my current schema. It consists of three transaction tables, a product table and a calendar table.
    - Background -
    The product table 'Q1 Data Set' contains all unique sales. In addition, it contains a number of columns by which I will later filter my pivot tables (e.g. whether the customer of the order is new/returning). This table also contains a column named 'DateOrdered', the date the order was originally placed (but not paid).
    Each sale can be paid either in a single transaction, or across multiple transactions of different transaction types.
    An example of a sale paid in multiple parts would be an order that has three transactions:
    one online (table 'trans_sagepay'),
    one over the phone (table 'trans_epdq')
    and another by card (table 'trans_manual'). Furthermore, there can be more than one transaction of each type for a sale.
    I have created measures which total the sales in each transaction table.
    Each transaction has a 'transaction_date' which is the date of that individual transaction.
    The calendar is simply a date table that has some friendly formatted columns for laying out pivot tables. An example column is FiscalMonthAbbrv, which displays months similar to '(04) - January' to accommodate our fiscal year.
    - Problem -
    My problem is that I need the ability to create some tables that have Date Ordered as the rows (listed by Year > Month), and I need to produce other tables that have Transaction Date as the rows.
    Date ordered works fine; however, the problem comes when I try to create a table based on the transaction date.
    With the current model seen in the attached image I cannot do it, because the transactions have a relationship to Q1 Data Set, and that table has the relationship with the Cal_Trans table. What happens in this scenario is that whenever I set the rows to FiscalMonthAbbrv, the values displayed are the transactions based not on transaction date but on date ordered. To explain further:
    If I have an order A with a DateOrdered of 01/01/2014, but the transaction of £100 for that order was made later, on 05/01/2014, that £100 is incorrectly attributed to 01/01/2014.
    To clarify the type of table I am aiming for, see the mock-up below; I do, however, NEED the ability to filter this table using columns found in Q1 Data Set.
    How can I make a schema so that I can use both DateOrdered and TransactionDate? I cannot combine all three transaction tables into one because each transaction type has columns unique to that specific type.

    Thanks for your suggestions. At the moment I don't have time to prepare a non-confidential copy of the data model; however, I've taken one step forward, and one step back!
    First, to clarify: to calculate sales of each transaction type I have created the following measures (I've given them friendly names):
    rev_cash
    rev_online
    rev_phone
    I then have a measure called rev_total which sums together the above measures. This allows me to calculate total revenue, but also to break it down by transaction type.
    With this in mind I revised the schema, based on Visakh's original suggestion, to look like this:
    Using this I was able to produce a table like the one below:
    There were two issues with this:
    If I add the individual measures for each transaction type I get no errors; as soon as I add the 'Total Sales' measure at the end of the table I get the error "Relationship between tables may be needed". Seemingly, however, the numbers still calculate as expected. What is causing this error and how do I remove it?
    In this scenario I CAN filter by 'phd', which is a column in the Q1 Data Set table, and it works as expected. I cannot, however, filter by all columns in this table; an example would be 'word count'.
    'Word Count' is an integer column; each record in the Q1 Data Set table has a value set for this column.
    I would like to take the column above and add a new measure called 'Total Word Count' (which I have created) which will calculate the total number of words in that monthly period. When I add this, however, I get the same relationship error as above and it displays the word count total for the entire source table for every row of the pivot table.
    How can I get this schema working so that I can filter by word count and other columns from the product table? It is confusing to me how I can filter by one column but not by another in the same table.
    Also, I don't fully understand how I would add a second date table or how it would help my issues.
    Thanks very much for your help.

  • Fact table design advice

    Good Morning All,
    I'm working on developing a cube that measures budget and actual cost for a customer that I'm working with. We have several dimensions that come into play:
    Organization - this dimension defines the various internal departments at the customer location, where each department sets a budget or actual cost for each month.
    DateTimePeriod - this dimension defines the transaction date when the budget or actual cost was recorded. This dimension contains year, quarter, month and day columns.
    Expense Item - this dimension defines a specific expense item that a budget and actual cost are assigned to, such as rent, utilities, software licences, etc...
    Cost Type - this dimension defines whether the cost within the fact table is a budget or actual cost.
    Within my fact table I store the primary key values for each of the dimension tables listed above. Included with this table is a cost column that represents the budget or actual cost. The problem that I'm having is that the budget cost and actual cost are separate records: I have one record that has the budget cost and another record that has the actual cost.
    My feeling is that the budget and actual cost should be stored on the same record instead of separate records. I would also note that we're using PerformancePoint to surface the cube data to the client, and both the budget and cost need to drill down only to the month level for phase 1. I have a feeling that the customer will want to measure down to the day level in the future...
    So my question is: what is the better design?
    Keeping the actual and budget costs within the fact table on separate rows, using the Cost Type dimension to identify whether the cost is a budget cost or an actual cost, or...
    Keeping the actual and budget costs within the fact table on the same row and removing the need for a Cost Type dimension?
    Please help...
    Make sure you mark my reply as the answer if it had solved your request. Brandon M. Hunter MCTS - SharePoint 2010 Configuration

    Why? Wouldn't it be easier if I make a database change and add the budget and actual cost on the same row? What would be the advantages or disadvantages of your approach?
    As per my experience, there can be more than one version of a budget value. Initially we start with a budget for an account, and then have the actual for the same account for the same period. If we are 100% sure that we get only these two versions (Budget and Actual), then up to a certain point a two-column implementation is fine. But what if the budget is revised; do you hold it by adding another column? What if you need to maintain a forecast value for the same account and period; create another column for that? Considering all accounting and budgeting scenarios, I still suggest having multiple rows for all these versions (or scenarios, or cost types). Again, refer to AdventureWorksDW to see this implementation.
    Since you are going to build a cube from this, you can easily view accounts recorded like this (which is mostly how business users view them when analysing accounts). That is an advantage too.
    In terms of disadvantages, I see only one, which is storage cost.
    Dinesh Priyankara
    http://dinesql.blogspot.com/
    Please use Mark as answer (Or Propose as answer) or Vote as helpful if the post is useful.
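    A rough sketch of the row-per-scenario shape recommended above (names and types are illustrative, not from the thread):

        -- One row per (organization, period, expense item, cost type).
        -- A revised budget or a forecast later is just another CostTypeKey
        -- value, with no schema change.
        CREATE TABLE FactCost (
            OrganizationKey INT           NOT NULL,  -- Organization dimension
            DateKey         INT           NOT NULL,  -- DateTimePeriod dimension
            ExpenseItemKey  INT           NOT NULL,  -- Expense Item dimension
            CostTypeKey     INT           NOT NULL,  -- Cost Type: Budget, Actual, ...
            Cost            DECIMAL(19,4) NOT NULL,
            PRIMARY KEY (OrganizationKey, DateKey, ExpenseItemKey, CostTypeKey)
        );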

  • Advice for placing rotated graphics in a table

    In order to help save some space, I would like to place 4 graphs in a 4 row x 2 column table and have the page in a landscape format (see attached PDF for the desired layout diagram).  What type of cell should be used for the figure titles to get the desired results: table cell, text frame, anchored frame, other?
    I tried a table cell but I do not know how to fix the row height. Rotating the text was easy. But it is the row height that matters since I am making this landscape, right? As I keep typing the figure title, the table row height just keeps growing.
    Perhaps I should not be trying to place the graphics and figure titles in a table.  My figures are wider than they are taller and that is why I want the overall page to be landscape.  I would like to do this multiple times within a book.

    Arnis Gubins wrote:
    Peter,
    My bad - you're right. I didn't check this out on FM before shooting off the answer. With FM8 & 9, it always rotates the pages automatically to the correct orientation (as far as I quickly tested). I was remembering a problem when printing envelopes and labels with FM6 where I had to always manually rotate the pages and the habit has kind of stuck in my head.
    I think the envelope template always was annoying because it assumed that the envelopes were only fed from the edge of the paper tray, so it lined up the envelope along the side. Some trays and manual feeders feed from the center, so that template's off by inches. Yes, I also remember something about the orientation being a fooler, which is why I learned to print to PDF or a sheet of used paper first, to remind me how it works.
    And yes, you remember correctly - in the Table Designer, the  Numbering option allows one to specify either Row First or Column First behaviour.
    Well, this is my bad - I had written it as the Paragraph Designer's Table Cell properties. There's good reason to get these wires crossed, viz. and to wit, the User Guide:
    Customize the vertical alignment of text in a cell
    1. Click in the cell you want to customize.
    2. Choose Format > Paragraphs > Designer to display the Paragraph Designer.
    3. Choose Table Cell from the Properties pop-up menu and then choose Top, Middle, or Bottom from the Cell Vertical Alignment pop-up menu.
    4. Click Apply To Selection.
    Note: The vertical alignment that you apply to a cell will persist even when you apply a different table format from the Table Designer.
    Specify the direction of autonumbering in a table
    When you insert a table, the table format determines the direction of autonumbering within cells—either across rows or down columns. This property also sets the direction of autonumbering for table footnotes.
    1. Click in the table you want to change, and choose Table > Table Designer.
    2. In the Basic properties of the Table Designer, choose Row First or Column First from the Numbering pop-up menu.
    3. Click Apply To Selection.
    The subtle transition from the Paragraph Designer's Table Cell properties context to the Table Designer's Basic Numbering property context is pretty slippery, IMO.
    Also, the writing around the bug of persistent vertical text alignment in a cell when changing table formats, is quite sly. I've never seen this called out for a bug to be fixed, so it's probably not encountered often, if at all.
    Regards,
    Peter Gold
    KnowHow ProServices

  • Dynamically Changing Table to File - Advice?

    My current setup for logging data selected by the user has been to a simple csv file with each read going to a new line.
    This has proved to be very annoying as the user must then reformat this file to their liking.  An example would be that the user requested a voltage and current at the same time and would like them to be in separate columns on the same row.
    Example:
    Before
    Time,Measurement,Value
    0:10:00,Voltage Measure,5.1
    0:10:00,Current Measure,.12
    0:20:00,Voltage Measure,4.2
    0:20:00,Current Measure,.23
    After
    Time,Voltage Measure,Current Measure
    0:10:00,5.1,.12
    0:20:00,4.2,.23
    and so on...
    So I have decided to allow the user to assign measurements to a certain "column" in a table, and every measurement at a given time (1-second intervals) gets put into the appropriate column. Then if a measurement comes at a different time, go to the next row and keep going.
    My issue is whether there is an efficient way to go from a table to a file. I know there is ArrayToFile but didn't see any TableToArray or TableToFile (CSV type) (may add that as a request).
    Code replies are appreciated since I would want this to be as efficient as possible.
    People's replies will also indicate how often I should flush to the output.
    Thanks

    Never tested it for efficiency, but the easiest way of passing a data table to a file is via the clipboard: the following code reads an entire table into the clipboard and creates a file with the table content. Columns are tab-separated, rows end with the usual CRLF termination, and empty cells are honoured; the file can then be read with Notepad.
        int     avail, fH = 0;
        char    *txt = NULL;
            /* Copy the entire table to the clipboard, then read it back as text */
            ClipboardPutTableVals (panel, control, VAL_TABLE_ENTIRE_RANGE);
            ClipboardGetText (&txt, &avail);
            if (avail) {
                fH = OpenFile ("table.txt", VAL_WRITE_ONLY, VAL_TRUNCATE, VAL_ASCII);
                WriteFile (fH, txt, strlen (txt));
                CloseFile (fH);
            }
            if (txt) free (txt);
    Proud to use LW/CVI from 3.1 on.
    My contributions to the Developer Zone Community
    If I have helped you, why not giving me a kudos?

  • Partitioning a Wide Table on RAID5--Need Advice

    We will soon have to add a new table to our database that is 2 GB in size and growing, so we are considering partitioning it for speed. I have a lot of Oracle experience, but haven't used partitioning before.
    NOTE:
    -- The database is on RAID5 disks
    -- The table is 70 fields wide so I'm only showing relevant fields
    -- We are still at Oracle 9i, unfortunately
    -- The data is re-loaded once per month; we never change it--SELECTS only
    Create table TRANSL1 (
    COMPANY_CD varchar2(5),
    POL_NO varchar2(10),
    COMMUNITY varchar2(6),
    ADDRESSKEY varchar2(25),
    STATE varchar2(2),
    FLOOD_ZONE varchar2(9),
    POST_FIRM varchar2(1) )
    The primary key will be company_cd + pol_no.
    I'm told the users usually pull up the data on one of the following:
    -- company_cd + pol_no
    -- state
    -- flood_zone
    -- addresskey
    -- post_firm
    -- company_cd by itself
    -- the first 2 characters of community
    -- they can also pull up on any other combination of fields they need
    After doing tons of reading on it, here's what I was thinking of doing.
    1. hash partition the table on company_cd
    2. local partition index on company_cd
    3. local partition index on pol_no
    4. regular index on state
    5. regular index on flood_zone
    6. regular index addresskey
    7. regular index on post_firm
    8. function index on community
    9. Create a normal primary key on company_cd + pol_no
    Does this look OK? Should the regular indexes be globally partitioned instead? Should we even be using partitioning with so many indexes?

    It is true that partitioning your tables can help with 'speed'. That means different things to different folks. It can also impede things seriously if not properly administered. Partitioning is just another tool. If used incorrectly, the tool can cause the job to take longer.
    Just a note... you mention using RAID5. Oracle partitioning has more to do with logically partitioning your tables (although different partitions may or may not reside in the same tablespaces which in turn may or may not reside in the same location(s)) for ease of administration and improved performance but the use of RAID5 here is secondary. I mention this because I've seen a few associate Oracle Partitioning with disk partitioning.
    Anyways, look over the following doc:
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14223/parpart.htm#g1020112
    And a book that will throw much light on partitioning is Tom Kyte's book Expert Oracle Database Architecture.
    Hope this helps.
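    For reference, steps 1, 8 and 9 of the plan above might look roughly like this (a sketch only; the partition count and the choice of a local primary key index are assumptions to be tested):

        CREATE TABLE transl1 (
            company_cd  VARCHAR2(5),
            pol_no      VARCHAR2(10),
            community   VARCHAR2(6),
            addresskey  VARCHAR2(25),
            state       VARCHAR2(2),
            flood_zone  VARCHAR2(9),
            post_firm   VARCHAR2(1),
            -- LOCAL is legal here because company_cd, the partition key,
            -- is part of the primary key.
            CONSTRAINT transl1_pk PRIMARY KEY (company_cd, pol_no)
                USING INDEX LOCAL
        )
        PARTITION BY HASH (company_cd)
        PARTITIONS 16;  -- hash partition counts work best as powers of two

        -- Function-based index on the first two characters of community.
        CREATE INDEX transl1_comm_fx ON transl1 (SUBSTR(community, 1, 2));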
