Ability to present table/spreadsheet of financial results

I have Xcelsius 4.5 and I am trying to present a table or spreadsheet view of financial results. I am using Xcelsius to provide a GUI for users to select and vary model assumptions. If I use the "grid" option, I get a warning about size (I would like to display 36 months across), and the same happens with the "table" option. Any suggestions?
Thanks in advance.
Andrew

The grid option is typically reserved for a small series of numbers that you would like to modify at run-time. Try using the spreadsheet table control listed under Selectors; this tends to have less overhead. If the display is too large for one screen, you can turn on horizontal/vertical scrollbars.
That being said, how many rows does your 36-month table have? If it is more than 50, the range will likely lead to slow performance. One way to split this up and reduce your overhead is to use VLOOKUPs to pull one section of data at a time (I would recommend splitting your 36-month table into fiscal/calendar years and pulling one year at a time if you decide your tables are too big); see the sketch below.
If you decide you need to see everything, you can still ignore the warning, place that much data into the control, and test whether there are any resulting performance issues; the warning won't actually prevent you from doing this.
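To illustrate the VLOOKUP idea (a sketch only; the cell references, range name, and key layout are hypothetical, not from your model): keep the full 36-month table on a hidden sheet as a named range keyed by "label-year", let the user pick a year into a selector cell, and have a small 12-column display range look itself up, for example:
=VLOOKUP($A2 & "-" & $B$1, MonthData, COLUMN()-1, FALSE)
Here $B$1 holds the selected year, column A holds the row labels, MonthData is the named range containing the full table, and the COLUMN()-1 offset would be adjusted to your layout. The grid or table control then binds only to the small visible range instead of all 36 months.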
Hope this helps!

Similar Messages

  • Doc listing all OBIA RPD Projects and related Subject areas and Presentation tables.

    Hi All,
    Can anyone help with the Oracle document ID that lists the RPD Projects with their related Subject Areas and Presentation tables?
    I tried using the documentation utility, but it cannot give me results organized by RPD Project.
    Thanks,
    Dev

    It is a combination of different activities:
    Product guide: check cool-bi.com
    RPD documentation based on the product guide.
    There might be a project called Project; try importing its rpd contents into another repository and then generate the rpd documentation from that.
    Mark as helpful if this helps.

  • Choosing which table is used for results

    To make this simple I am going to use two dimension tables and one fact table.
    Say you have two dimension tables, one for Country and one for Region.
    Then you have one fact table which shows transactions at the Region level, but Country is also in the PK for the table, so there is a join from Country to F1 and another from Region to F1.
    When I select Country and Region in my query and no fact column, it uses the fact table to get the results. The problem with this is that if there are no transactions for a given Region, then that Region does not appear. I would like the result to pull Country from the Country dimension and Region from the Region dimension.
    Any help is greatly appreciated.

    Hi user10800227,
    So if I understand your situation correctly, you have the following going on:
    TABLES)
    COUNTRY_DIM
    REGION_DIM
    FACT
    JOINS)
    REGION_DIM.F1 1-to-many FACT.F1 many-to-one COUNTRY_DIM.F1 (INNER JOINS)
    In your report, you pull one field from COUNTRY_DIM and one field from REGION_DIM.
    Assuming that the above is correct, what you are seeing is the right behavior. Each Answers request is sent out as a single query. So when you pull a field from REGION_DIM and COUNTRY_DIM, OBIEE has to find a way to join those two tables together in a single query. Based on the metadata you set up in the RPD, OBIEE sees that the proper way to relate a REGION_DIM record to a COUNTRY_DIM record is through the FACT table. So when it generates SQL it joins REGION_DIM to the FACT, and the FACT to COUNTRY_DIM. Hence, if all your joins are inner joins and there are no transactions for a given Country/Region combination, you won't see it in your list.
    If you want to see all the data points, you might want to make that join an outer join in the repository.
    Your other option is to use Nico's data densification technique. Visit this link for details: http://gerardnico.com/wiki/dat/obiee/logical_sql/obiee_sql_densification
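    To make the inner-join behavior concrete, here is a minimal sketch of the physical SQL OBIEE effectively generates (the table and column names are hypothetical):
    SELECT r.region_name, c.country_name
    FROM   region_dim r
    JOIN   fact f        ON f.region_key  = r.region_key
    JOIN   country_dim c ON c.country_key = f.country_key;
    -- inner joins: a region with no fact rows disappears entirely
    SELECT r.region_name, c.country_name
    FROM   region_dim r
    LEFT JOIN fact f        ON f.region_key  = r.region_key
    LEFT JOIN country_dim c ON c.country_key = f.country_key;
    -- outer joins: the region survives, though its country comes back NULL,
    -- which is why full densification can still be needed when every
    -- Country/Region combination must appear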
    Good luck and if you found this post useful, please award points!
    Best regards,
    -Joe

  • Remove "Export Spreadsheet" button in result list for search on BP in WebUI

    Hi everybody,
    Could someone tell me how to remove the standard button "Export spreadsheet" from the result list of a search for Customer and Account in the WebUI?
    Is it possible to do this by using the transaction BSP_WD_CMPWB "BSP WD Component Workbench"?
    The component is CRM_UI_FRAME and I'm afraid that if I make some changes to this one, the modification will affect all other transactions with a search...
    I'm open to other solutions (if any exist ;o)
    Thanks in advance,
    Luis.

    Hello Michael,
    First thing, thanks for your quick answer :o)
    It's the first time that I have used the BSP WD Component Workbench, so I don't know what you are talking about, sorry :o)
    I understand that I have to set the parameter "downloadToExcel" to FALSE, but I don't know where it is.
    I looked in the tree node "View layout" of the component CRM_UI_FRAME.
    I didn't find the tag "chtmlb" in the code.
    Could you please explain this to me a little more (or just tell me where to look ;o)
    Thank you very much,
    Luis.

  • Using an independent presentation table error

    Hi all
    I'm building a repository with the Administration Tool of OBIEE. The data source is an Oracle 10g data warehouse created with OWB.
    I have a currency table which is used just for reporting; it doesn't have any physical link to any other table.
    I want to use it as a presentation table in my presentation model, but when I add it to the business and presentation layers I get the following error in the consistency check:
    ERRORS:
    BUSINESS MODEL DWH:
    [nQSError: 15001] Could not load navigation space for subject area DWH.
    [15013] Logical table, DWH.DWH_CURRENCY_D, does not join to any other logical table
    Has anyone seen this issue?
    Thanks

    HI..
    A Business Model will not validate with just one standalone table..
    There should be at least one dimension and one fact... so,
    you can do one thing..
    Create a dummy table (say, a dummy dimension) with a dummy column holding the constant '1'...
    Make this column the Key for that logical table..
    Now connect (complex join) from the dummy dimension table to your fact (currency)... (1-to-many relationship)
    Now the error will be gone, and you will be able to run a consistency check on the rpd..
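    If you prefer a physical dummy table in the source database instead of a derived constant column, a minimal sketch (the table and column names are illustrative, assuming an Oracle source):
    CREATE TABLE DUMMY_DIM AS
    SELECT 1 AS DUMMY_KEY FROM DUAL;
    Import DUMMY_DIM into the physical layer and use it as the dummy dimension described above, joined through the constant key to the currency table.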
    Try this.. and report back..
    Hope it works..
    ALL THE BEST
    Thanks & Regards
    Kishore Guggilla

  • Ordering presentation tables and columns in Answers

    Hi all
    I have a product dimension structured like:
    ProductDim
      All
      - BA
          ba name
          ba number
      - RA
          ra name
          ra number
    In Answers, 'All', which is a presentation column, is displayed after BA and RA, which are presentation tables. Is there a way to change this order?
    Regards
    Adil

    No, unfortunately this is not possible. When opening a folder, all of its underlying folders are shown first, then its own underlying columns. The only workaround is to create a folder 'Folder All' holding a column 'Column All'.
    Regards,
    Stijn

  • Crash Table Spreadsheet

    The book suggests creating a crash table spreadsheet in Excel. Its benefit is that it lets you compare the crashing costs of the tasks.
    The Task Name, Duration and Cost columns are taken from MS Project and copied and pasted into the Excel spreadsheet.
    Then the Crash Reduction, Crash Cost and Cost Per Week columns are added.
    Crash Reduction is the amount by which the duration of the selected task can be crashed.
    Cost Per Week is the cost of crashing per week; it is found by Crash Cost / Crash Reduction.
    My questions are:
    Do we calculate Crash Costs manually?
    Do we calculate Crash Reductions manually? (That is, do we work out manually how much time we can crash?)

    Hi Jack,
    I suppose you are talking about "The Missing Manual" by Bonnie.
    On page 353, you'll see that you first have to bring the critical path from MS Project into Excel. Then add the column headers Crash Reduction, Crash Cost and Cost Per Week. The crash costs and the crash reductions have to be estimated manually, meaning that for your longest critical task you
    estimate/evaluate how much its duration can be reduced and how much that reduction will cost.
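    As a worked example (the numbers are invented for illustration): if a task can be crashed by 3 weeks at a total crash cost of $6,000, its Cost Per Week is $6,000 / 3 = $2,000 per week; when shortening the schedule you would crash the critical task with the lowest cost per week first.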
    Hope this helps,
    Guillaume Rouyre, MBA, MVP, P-Seller |

  • Is there a pre-installed application on the iMac that would allow me to create documents, presentations and spreadsheets or do I have to buy iWork for this?

    Is there a pre-installed application on the iMac that would allow me to create documents, presentations and spreadsheets or do I have to buy iWork for this?
    Thank you.

    No pre-installed app that will do all that, and yes, you could buy iWork - or you could download the free (although they do seek donations) LibreOffice: http://www.libreoffice.org/ or OpenOffice: http://www.openoffice.org/porting/mac/

  • Can you Easily Create Table/Spreadsheet in Acrobat XI Pro?

    Header 1 | Header 2 | Header 3
    I want to add a table like this to an Acrobat XI Pro fillable form. Can it do that? I am test-driving this software and don't want to buy it if it won't easily manage tables/spreadsheets that can add, delete, calculate, etc.

    Acrobat is not an editor, but a creator of PDFs. Minor edits are an option, but not easy to do.

  • JOIN ON 2 different sets of tables depending on the result of the first set

    I have a query which returns results. I want to join this query to one of
    two different sets of tables, depending on whether the first set has a result or not:
    if the first set didn't have any results or records, then check the second set.
    SELECT 
    peo.email_address,
    r.segment1 requistion_num,
    to_char(l.line_num) line_num,
    v.vendor_name supplier, 
    p.CONCATENATED_SEGMENTS category,
    to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
    TO_CHAR(l.need_by_date,'MM/DD/YYYY') need_by_date,
    pe.full_name requestor,
    l.item_description,
    pr.segment1 project_num,
    t.task_number,
    c.segment1,
    c.segment2
    FROM po_requisition_headers_all r,
         po_requisition_lines_all l,  
    (SELECT project_id,task_id,code_combination_id, distribution_id,requisition_line_id,creation_date  FROM
    (SELECT project_id,task_id,code_combination_id,distribution_id,creation_date,requisition_line_id,ROW_NUMBER ()
    OVER (PARTITION BY requisition_line_id ORDER BY requisition_line_id,distribution_id ) rn
    FROM po_req_distributions_all pod) WHERE rn = 1) d,
    gl_code_combinations c,
    POR_CATEGORY_LOV_V p,
    per_people_v7 pe,
    PA_PROJECTS_ALL pr,
    PA_TASKS_ALL_V t,
    ap_vendors_v v
    WHERE  d.creation_date >= nvl(to_date(:DATE_LAST_CHECKED,
    'DD-MON-YYYY HH24:MI:SS'),SYSDATE-1)
    AND
    l.requisition_header_id = r.requisition_header_id
    AND l.requisition_line_id = d.requisition_line_id
    AND d.code_combination_id = c.code_combination_id
    AND r.APPS_SOURCE_CODE = 'POR'
    AND l.category_id = p.category_id
    AND r.authorization_status IN ('IN PROCESS','PRE-APPROVED','APPROVED')
    AND l.to_person_id = pe.person_id
    AND pr.project_id(+) = d.project_id
    AND t.project_id(+) = d.project_id
    AND t.task_id(+) = d.task_id
    AND v.vendor_id(+) = l.vendor_id
    and r.requisition_header_id in(
    SELECT requisition_header_id FROM po_requisition_lines_all pl                    
    GROUP BY requisition_header_id HAVING SUM(nvl(pl.quantity,0) * nvl(pl.unit_price, 0)) >=100000)
    group by
    peo.email_address,
    r.REQUISITION_HEADER_ID,
    r.segment1 ,
    to_char(l.line_num) ,
    v.vendor_name, 
    p.CONCATENATED_SEGMENTS ,
    to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
    TO_CHAR(l.need_by_date,'MM/DD/YYYY') ,
    pe.full_name ,
    l.item_description,
    c.segment1,
    c.segment2,
    pr.segment1 ,
    t.task_number
    I want to join this query with this first set:
    SELECT b.NAME, c.segment1 CO, c.segment2 CC,
              a.org_information2 Commodity_mgr,
              b.organization_id, p.email_address
         FROM hr_organization_information a, hr_all_organization_units b, pay_cost_allocation_keyflex c, per_people_v7 p
        WHERE a.org_information_context = 'Financial Approver Information'
          AND a.organization_id = b.organization_id
           AND b.COST_ALLOCATION_KEYFLEX_ID = c.COST_ALLOCATION_KEYFLEX_ID
           and a.ORG_INFORMATION2 = p.person_id
          AND NVL (b.date_to, SYSDATE + 1) >= SYSDATE
          AND b.date_from <= SYSDATE;
    If this doesn't return any result, then I need to join the query with the 2nd set:
    select lookup_code, meaning, v.attribute1 company, v.attribute2 cc,
                decode(v.attribute3,null,null,p1.employee_number || '-' || p1.full_name) sbu_controller,
                decode(v.attribute4,null,null,p2.employee_number || '-' || p2.full_name) commodity_mgr
                from fnd_lookup_values_vl v,
                per_people_v7 p1, per_people_v7 p2
                where lookup_type = 'BIO_FIN_APPROVER_INFO'
                  and v.attribute3 = p1.person_id(+)
                and v.attribute4 = p2.person_id(+)
                order by lookup_code
    How do I do it?

    I have hard-coded the 2 join sets into one using UNION ALL, but if a record exists in both sets, how would I differentiate between the 2 sets?
    COUNT(*) will only give the total records.
    Suppose there are 14 in total:
    the first set gives 12 records,
    the second set gives 4 records.
    I want only 14 records, which could be 12 from set 1 and 2 from set 2, since set 1 and set 2 can have common records.
    SELECT 
    peo.email_address,
    r.segment1 requistion_num,
    to_char(l.line_num) line_num,
    v.vendor_name supplier, 
    p.CONCATENATED_SEGMENTS category,
    to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
    TO_CHAR(l.need_by_date,'MM/DD/YYYY') need_by_date,
    pe.full_name requestor,
    l.item_description,
    pr.segment1 project_num,
    t.task_number,
    c.segment1,
    c.segment2
    FROM po_requisition_headers_all r,
         po_requisition_lines_all l,  
    (SELECT project_id,task_id,code_combination_id, distribution_id,requisition_line_id,creation_date  FROM
    (SELECT project_id,task_id,code_combination_id,distribution_id,creation_date,requisition_line_id,ROW_NUMBER ()
    OVER (PARTITION BY requisition_line_id ORDER BY requisition_line_id,distribution_id ) rn
    FROM po_req_distributions_all pod) WHERE rn = 1) d,
    gl_code_combinations c,
    POR_CATEGORY_LOV_V p,
    per_people_v7 pe,
    PA_PROJECTS_ALL pr,
    PA_TASKS_ALL_V t,
    ap_vendors_v v
    WHERE  d.creation_date >= nvl(to_date(:DATE_LAST_CHECKED,
    'DD-MON-YYYY HH24:MI:SS'),SYSDATE-1)
    AND
    l.requisition_header_id = r.requisition_header_id
    AND l.requisition_line_id = d.requisition_line_id
    AND d.code_combination_id = c.code_combination_id
    AND r.APPS_SOURCE_CODE = 'POR'
    AND l.category_id = p.category_id
    AND r.authorization_status IN ('IN PROCESS','PRE-APPROVED','APPROVED')
    AND l.to_person_id = pe.person_id
    AND pr.project_id(+) = d.project_id
    AND t.project_id(+) = d.project_id
    AND t.task_id(+) = d.task_id
    AND v.vendor_id(+) = l.vendor_id
    and r.requisition_header_id in(
    SELECT requisition_header_id FROM po_requisition_lines_all pl                    
    GROUP BY requisition_header_id HAVING SUM(nvl(pl.quantity,0) * nvl(pl.unit_price, 0)) >=100000)
    group by
    peo.email_address,
    r.REQUISITION_HEADER_ID,
    r.segment1 ,
    to_char(l.line_num) ,
    v.vendor_name, 
    p.CONCATENATED_SEGMENTS ,
    to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
    TO_CHAR(l.need_by_date,'MM/DD/YYYY') ,
    pe.full_name ,
    l.item_description,
    c.segment1,
    c.segment2,
    pr.segment1 ,
    t.task_number
    UNION ALL
    SELECT 
    NULL email_address,  -- placeholder so both UNION ALL branches return the same column count
    r.segment1 requistion_num,
    to_char(l.line_num) line_num,
    v.vendor_name supplier, 
    p.CONCATENATED_SEGMENTS category,
    to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
    TO_CHAR(l.need_by_date,'MM/DD/YYYY') need_by_date,
    pe.full_name requestor,
    l.item_description,
    pr.segment1 project_num,
    t.task_number,
    c.segment1,
    c.segment2
    FROM po_requisition_headers_all r,
         po_requisition_lines_all l,  
    (SELECT project_id,task_id,code_combination_id, distribution_id,requisition_line_id,creation_date  FROM
    (SELECT project_id,task_id,code_combination_id,distribution_id,creation_date,requisition_line_id,ROW_NUMBER ()
    OVER (PARTITION BY requisition_line_id ORDER BY requisition_line_id,distribution_id ) rn
    FROM po_req_distributions_all pod) WHERE rn = 1) d,
    gl_code_combinations c,
    POR_CATEGORY_LOV_V p,
    per_people_v7 pe,
    PA_PROJECTS_ALL pr,
    PA_TASKS_ALL_V t,
    ap_vendors_v v,
    fnd_lookup_values_vl flv,
    per_people_v7 p1,
    per_people_v7 p2
    WHERE  d.creation_date >= nvl(to_date('11-APR-2008',
    'DD-MON-YYYY HH24:MI:SS'),SYSDATE-1)
    AND
    l.requisition_header_id = r.requisition_header_id
    AND l.requisition_line_id = d.requisition_line_id
    AND d.code_combination_id = c.code_combination_id
    AND r.APPS_SOURCE_CODE = 'POR'
    AND l.org_id = 141
    AND l.category_id = p.category_id
    AND r.authorization_status IN ('IN PROCESS','PRE-APPROVED','APPROVED')
    AND l.to_person_id = pe.person_id
    AND pr.project_id(+) = d.project_id
    AND t.project_id(+) = d.project_id
    AND t.task_id(+) = d.task_id
    AND v.vendor_id(+) = l.vendor_id
    AND flv.attribute1=c.segment1
    AND flv.attribute2=c.segment2
    AND flv.lookup_type = 'BIO_FIN_APPROVER_INFO'
    and flv.attribute3 = p1.person_id(+)
    and flv.attribute4 = p2.person_id(+)
    and r.requisition_header_id in(
    SELECT requisition_header_id FROM po_requisition_lines_all pl                    
    GROUP BY requisition_header_id HAVING SUM(nvl(pl.quantity,0) * nvl(pl.unit_price, 0)) >=100000)
    group by
    r.REQUISITION_HEADER_ID,
    r.segment1 ,
    to_char(l.line_num) ,
    v.vendor_name, 
    p.CONCATENATED_SEGMENTS ,
    to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
    TO_CHAR(l.need_by_date,'MM/DD/YYYY') ,
    pe.full_name ,
    l.item_description,
    c.segment1,
    c.segment2,
    pr.segment1 ,
    t.task_number
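    One way to get the 14 rows and still tell the sets apart (a sketch; the stub subqueries and key columns below are placeholders standing in for your two full queries): tag each set with a source marker, take everything from set 1, and take from set 2 only the keys that set 1 did not return:
    WITH set1 AS (
      -- your first query goes here; stubbed out for illustration
      SELECT requisition_header_id, requisition_line_id FROM po_requisition_lines_all WHERE 1 = 0
    ),
    set2 AS (
      -- your second query goes here; stubbed out for illustration
      SELECT requisition_header_id, requisition_line_id FROM po_requisition_lines_all WHERE 1 = 0
    )
    SELECT s1.requisition_header_id, s1.requisition_line_id, 1 AS source_set
    FROM   set1 s1
    UNION ALL
    SELECT s2.requisition_header_id, s2.requisition_line_id, 2 AS source_set
    FROM   set2 s2
    WHERE  NOT EXISTS (
             SELECT 1
             FROM   set1 s1
             WHERE  s1.requisition_header_id = s2.requisition_header_id
             AND    s1.requisition_line_id   = s2.requisition_line_id );
    The source_set column tells you which set each row came from, and rows common to both sets are returned only once (from set 1), which gives the 12 + 2 = 14 behavior you describe.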

  • Recalling Spreadsheets and appending results into 1 file.

    Hi,
    I am at a stage of my program where I have to do numerical analysis on a set of data collected by an optical spectrum analyser. The data has been saved in multiple spreadsheets (each with a row for the wavelengths, x-values, and a row for the power, y-values). I need to get the power values and put them all in one spreadsheet file, with the wavelength values in the first row.
    So multiple 2D spreadsheets:
         Wavelength: x1 x2 x3 ...
         Power 1:    P1 P2 P3 ...
         Wavelength: x1 x2 x3 ...
         Power 2:    p1 p2 p3 ...
    will turn into:
         Wavelength: x1 x2 x3 ...
         Power 1:    P1 P2 P3 ...
         Power 2:    p1 p2 p3 ...
    Attached below is my attempt at doing this, which doesn't seem to work. The loops are there so that the 'Read From Spreadsheet File' VI recalls all the spreadsheets, which it does do, but I don't know how to improve it so that all the files are successfully appended into one 2D array (as shown above).
    Please can someone help me figure out how to do this (recall multiple spreadsheets and then save them into one)? Right now only the files recalled in the last loop iteration are saved into the single spreadsheet.
    The file paths of the spreadsheets all depend on the number (N), so the file paths are 00.txt, 01.txt, 02.txt, 03.txt, ..., 0N.txt, 10.txt, 11.txt, ..., 1N.txt, 20.txt, 21.txt, ..., NN.txt, and the final file will be RESULTS.txt (as seen in the pic).
    Any input will be greatly appreciated, it seems like an easy concept, but the answer totally evades me.  
    thanks a lot,
    Asiri  
    Attachments:
    Savespreadsheet.jpg 121 KB

    As you can see in the image, I have used the required VIs to recall and save the spreadsheets. What I am having problems with is appending ALL of the spreadsheets; my loops are not working properly, and I only append a select number of spreadsheets/arrays together depending on the loop iterations.
    Perhaps there is something wrong with the way I have everything wired? I index at the outer loop; however, this information gets lost in subsequent iterations. What I need ideally is some sort of VI which inserts subarrays into consecutive rows of an array through just one input (such as in my loops).
    Any suggestions are greatly appreciated.
    thank you!
    asiri  
    Attachments:
    Savespreadsheet.jpg 121 KB

  • Sony Q1 2012 financial result

    Mobile Products & Communications (MP&C)
    The MP&C segment includes the Mobile Communications and Personal and Mobile Products categories.  Mobile Communications
    includes mobile phones; Personal and Mobile Products includes personal computers.  The supplemental pro forma financial information
    related to Sony Mobile is presented to enhance investors’ understanding of Sony’s operating results, is based on estimates and assumptions
    which Sony believes are reasonable, is not intended to represent or be indicative of what Sony’s operating results would have been had Sony
    Mobile been a wholly-owned subsidiary for the fiscal year ended March 31, 2012, and should not be taken as indicative of Sony’s future
    operating results.
    Sales increased 132.9% year-on-year (a 151% increase on a constant currency basis) to 285.6 billion yen (3,615
    million U.S. dollars).  This increase was primarily due to the consolidation of Sony Mobile, partially offset by
    lower sales of PCs mainly resulting from price declines.
    On a pro forma basis, had Sony Mobile been fully consolidated in the same quarter of the previous fiscal year,
    segment sales would have increased approximately 14%.  This increase was primarily due to higher average
    selling prices of mobile phones resulting from a shift to smartphones from feature phones, and higher unit sales of
    smartphones driven mainly by the strong performance of Xperia S and Xperia acro HD.
    Operating loss of 28.1 billion yen (356 million U.S. dollars) was recorded, compared to operating income of 1.6
    billion yen in the same quarter of the previous fiscal year.  This deterioration in segment results was due to the
    impact of the above-mentioned lower sales of PCs and the impact associated with the acquisition of Sony Mobile,
    which became a wholly-owned subsidiary, including incremental intangible asset amortization and certain royalty
    adjustments.
    The pro forma segment operating loss after the above-mentioned adjustment in the same quarter of the previous
    fiscal year was approximately 7.2 billion yen.  The deterioration in the operating results on a pro forma basis was
    primarily due to lower sales of PCs.

    Outlook for the Fiscal Year ending March 31, 2013
    MP&C
    Primarily due to the lowering of the annual unit sales forecast for PCs, sales are expected to be lower than the May
    forecast.  Due to the above-mentioned decrease in sales and the impact of unfavorable exchange rates, operating
    results are expected to be significantly below the May forecast.  Due to the consolidation of Sony Mobile, sales
    are expected to increase significantly year-on-year.  Operating results are expected to deteriorate significantly
    year-on-year primarily due to the large remeasurement gain recorded in the prior fiscal year for Sony Mobile.
    On a pro forma basis, had Sony Mobile been fully consolidated from the beginning of the previous fiscal year, a
    significant increase in sales and a significant improvement in operating results would be anticipated.
    http://www.sony.net/SonyInfo/IR/financial/fr/12q1_sony.pdf

  • Table to see Inspection result recording

    Mentors,
    We have a material with 60 intermittent operations to be recorded, based on the 3 stages of the material. All the operations have a single characteristic value. Results are recorded for the lots which are created based on the stage:
    1) For the new-arrival stage, result recording is done for operations 10-25
    2) For the dispute stage, result recording is done for operations 31-46
    3) For the recycle stage, result recording is done for operations 50-66
    Now I want to know: how can I check, at table level, in which of the operations results are recorded? (Which table sequence derives the values stored for the results of the operations?)

    Hi,
    Check the following tables:
    QAES : Sample unit table
    QASR : Sample results for inspection characteristics
    QAMV : Characteristic specifications for inspection processing
    QAPP : Inspection point
    Regards
    Ravi Shankar.
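    If you want to see quickly which operations actually have recorded results for a lot, a hedged sketch of a lookup against QASR (verify the field names in SE11 first; they are from memory, not from your system):
    SELECT PRUEFLOS, VORGLFNR, MERKNR
    FROM   QASR
    WHERE  PRUEFLOS = :inspection_lot
    ORDER  BY VORGLFNR, MERKNR;
    VORGLFNR is the operation node number, which you can map back to the operation number via QAPO (or to the characteristic specifications via QAMV) for the same lot.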

  • How to clear UNB table in the payroll result?

    Hi,
    We are getting the payroll error "The gross wages do not cover the negative offset that has been forwarded. Therefore, no gross up is permitted" while running the gross-up.
    I see the "UNB table" in the last payroll result: "UNB - Unbalance table used for tax retrocalculation".
    I think we are getting the above error because of this UNB table; can anyone help me with how to clear this table?
    When we run the regular payroll, no tax is being deducted.
    Please help
    Saurabh

    Hi Arti,
    Thanks for your reply!!..
    Seeing your reply gives me some confidence that we can crack the existing problem; however, I'm still not clear on your answer... let me explain the problem:
    1. The employee was given 350,000 through the taxable bonus wage type with a regular pay check
    2. Later it came to light that, out of the above amount, 75,000 was a gross-up amount
    3. So in the next payroll run, they entered -75,000 in the regular taxable bonus WT and +75,000 in the gross-up WT, and they also deleted one IT210 record for the GA tax authority which was wrongly created; then they ran the payroll
    4. Since then, in each payroll period no tax is being deducted, so we are creating IT221 infotypes with the tax wage type
    5. If we run the gross-up wage, we get the payroll error: The gross wages do not cover the negative offset that has been forwarded; therefore, no gross-up is permitted.
    While running the regular payroll, an overpayment wage type is generated.
    Now, I see that this UNB table has been created, and the wage types below are in the UNB table:
    WT /5UT amount 0.00
    WT 5430 amount -75000
    WT 0200 amount 12500 (monthly salary)
    WT 4530 amount 350000
    Now I am thinking: if we are deducting tax through IT221 and we clear this UNB table, our problem might get resolved.
    It would be a great help if you could tell me:
    1. How do I check whether the claims process is implemented or not?
    2. I have good HCM experience, but this claims process is very new to me; please guide me on how to do it.
    Waiting for your reply; thanks in advance
    Saurabh Garg

  • How can I export an Oracle DB table or a query result to an mdb (MS Access) file?

    Hi,
    I would like to export a particular table or a particular data subset (the result of a SQL query) to an MDB file. I tried with Oracle SQL Developer, but I cannot see mdb as a possible export option.
    How can I export data to an mdb file? I'm currently using SQL Developer on GNU/Linux; can you suggest a link or a tool for doing such things?
    I would also consider it an acceptable solution to use SQL Developer to export the data in some format and then use another piece of software to open that format and create an mdb. What do you suggest?

    Open the (an) MS Access database and:
    1) Click on File -> Link tables -> Select ODBC Database(s)
    2) In the popup window select (or create a new) "Machine Data Source" -> Oracle Driver -> select the TNS alias (service name)
    Good luck!
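    Since you said a two-step route is acceptable, here is a hedged sketch of the other direction: spool the query result to a CSV file (SQL Developer can also export CSV directly) and then import that CSV into Access via File -> Get External Data. The path and query below are placeholders:
    -- run in SQL*Plus
    SET PAGESIZE 0
    SET FEEDBACK OFF
    SET LINESIZE 32767
    SET TRIMSPOOL ON
    SET COLSEP ','
    SPOOL /tmp/export.csv
    SELECT * FROM your_table;
    SPOOL OFF
    Note that COLSEP produces a column-padded delimited file; the Access import wizard can read it as delimited or fixed-width and trim the padding there.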
