Report Complexity

I have a very complex report.
It contains information from the same table with different parameters.
It contains information from several different tables.
And it includes a table of data from one table with a summary of totals at the foot of the table.
In previous tools I have been able to use different functions to obtain the data I need and then output the data field by field onto my report.
Can Oracle Reports handle a complex report like this?
My attempt has been to create separate Report Blocks for each query. However, this may create a real performance problem, since the report could potentially have 26 queries to get all the required data.
Is this the right way to go about this?
Is there a better way?
One option I was considering was having a set of database beans that populated a report bean. The report bean members could then be used in the JSP. Is this one way to do this?
Thanks in advance,
Jeanne

hello,
Oracle9i Reports is certainly able to deal with this kind of complexity. The performance impact can fall within a very wide range; it mostly depends on the way you design your queries and support them with e.g. indexes in the database.
regards,
philipp
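One common way to keep the query count down when several of the 26 data sets come from the same table with different parameters is conditional aggregation, which fetches several slices in one pass. This is only a sketch, using a hypothetical orders table rather than Jeanne's actual schema:

```sql
-- Hypothetical ORDERS table: one scan replaces three separate
-- queries (open total, closed total, row count).
SELECT SUM(CASE WHEN status = 'OPEN'   THEN amount END) AS open_total,
       SUM(CASE WHEN status = 'CLOSED' THEN amount END) AS closed_total,
       COUNT(*)                                         AS order_count
FROM   orders
WHERE  order_date >= :p_start_date;
```

Each extra CASE branch adds another parameterized result column without adding another query, which is usually far cheaper than a separate round trip per parameter.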

Similar Messages

  • Oracle Reports complex query

    Hi Friends,
I am a newbie on Oracle Reports. I have been asked to revise/debug a complex report.
    This is the partial source of the Oracle Report Query:
    select  mta.transaction_date,
            &P_acct_flex,     
            DECODE(mta.transaction_source_type_id,1,to_char(mmt.transaction_source_id),2,&P_so_flex,
                   mmt.transaction_source_name) Source
    from
           mtl_system_items            msi, 
           &C_from_cat 
           &C_from_type
           &C_source_from
           mtl_material_transactions   mmt,
           gl_code_combinations        gcc,
           MTL_SALES_ORDERS            mkts,
           MTL_GENERIC_DISPOSITIONS    mdsp,
           GL_CODE_COMBINATIONS        gl,
           mtl_transaction_accounts    mta
    where   mta.transaction_id = mmt.transaction_id
           and  &P_where_acct
           and  &P_where_item
           and  &P_where_cat
                &C_where_cat
                &C_where_type
           and  msi.organization_id = :P_org_id
                &P_date_range
                &C_where_value
                &C_where_subinv
                &C_where_reason
                &C_source_where
/
I want to test the report query first in the SQL*Plus environment, but I find it hard to substitute so many variables :(
How do I substitute these variables, and what data types should I use for them?
My questions are:
1. What's the difference between the &variable and :variable forms? Are they both parameter variables that need input from the parameter screen at runtime?
    2. From the last part of the query
           and  msi.organization_id = :P_org_id
                &P_date_range
                &C_where_value
                &C_where_subinv
                &C_where_reason
            &C_source_where
How do I substitute the variables here? I am expecting some "operators" between them, but there are none.
Can I do the query as "and msi.organization_id = &P_org_id" instead of "and msi.organization_id = :P_org_id"?
    Please help...
    Thanks a lot
    msk
    Edited by: yxes2009 on Jan 11, 2010 8:07 PM

    Use Toad here if you can. It's easier to assign values for :p or &var.
1. What's the difference between the &variable and :variable forms?
DM answered it.
2. From the last part of the query:
and  msi.organization_id = :P_org_id
&P_date_range
&C_where_value
&C_where_subinv
&C_where_reason
&C_source_where
How do I substitute the variables here?
Keep in mind that the substitutes should make sense. For example, :P_org_id can be 123 and &P_date_range can be 1=1 or 'and ... between ... and ...' etc.
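To test the query under SQL*Plus, the &-variables can be filled in with DEFINE (plain text substitution performed before parsing, which is why no operators appear around them in the source) and the :-variables with VARIABLE (true bind variables). A minimal sketch with made-up values:

```sql
-- Substitution variables: spliced into the statement as literal text.
DEFINE P_date_range  = "AND mta.transaction_date BETWEEN SYSDATE - 30 AND SYSDATE"
DEFINE C_where_value = "AND 1=1"   -- neutral filler for an unused optional clause

-- Bind variable: sent to the server as a real parameter at execution time.
VARIABLE P_org_id NUMBER
EXEC :P_org_id := 123

SELECT COUNT(*)
FROM   mtl_transaction_accounts mta
WHERE  mta.organization_id = :P_org_id
       &P_date_range
       &C_where_value;
```

The date range and the 123 org id are assumptions for illustration; any lexical &-clause that is not needed can be defined as a no-op such as AND 1=1.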

  • Complex reports in PDF on Cloud - URL friendly - SSH access

    I have been an APEX developer with Oracle since version 1.6.
    I have been considering the Cloud as a solution, but I have several questions:
    1. Is it possible to have a WAR file on the Cloud to serve reports (for example the JasperReports Integration), or can we use BI for our reports and complex PDFs?
    2. What about friendly URLs? Can we use a CNAME or a record on a DNS server to do it?
    3. Another question: how can I do my backup (an export file)? Is it possible to have SSH access?
    Thanks

    Hi,
    Your question has been answered in the following post :
    http://social.msdn.microsoft.com/Forums/en-US/6c918448-3787-4895-93cb-ac3d501c9a75/errors-in-the-ole-db-providercould-not-connect-to-the-redirectorensure-that-the-sqlbrowser-service?forum=tfsreporting
    You can also refer :
    http://social.msdn.microsoft.com/Forums/en-US/665cbea3-a338-4fb1-9d0e-51d5e6aa4579/errors-in-the-ole-db-provider-could-not-connect-to-the-redirectorensure-that-sqlservices-browser?forum=sqlanalysisservices
    Regards,
    Bharath

  • Regarding Performance Management Report (Complex)

    Hi Experts,
    I am facing a problem with a performance management report. In function module HRHAP_DOCUMENT_GET_DETAIL, the tables returned are t_body_elements, t_body_columns and t_body_cell_note, but the relationships between them are not clear to me: foreign_type and foreign_id are obtained while checking the condition Row_IID = column. Is that correct or not? If I am wrong, please correct me. And one more thing:
    after getting foreign_id and foreign_type, I want to get the delivery method from table HRP5007 by passing foreign_id and foreign_type. For security reasons they suggest using an FM, but I don't know which FM is suitable for getting this data. Can you give the flow logic for this requirement?
    Along with that, I have to get details from 1042, passing the same values (foreign_type, foreign_id), and the qualifications acquired from 1001 by passing foreign_type and foreign_id. If I am wrong, just correct me.
    thanks in advance.
    Raju.


  • Convert table linked report into a Command version - Howto?

    How can I convert hundreds of reports from table linked into a Command version?
    One way to do it is to take the SQL syntax off the report, add a Command with that syntax, and manually replace the database fields with the Command fields.
    I cannot believe that is the right way to do this. Is there no hidden (re-)mapping-fields button or another undocumented feature?
    If this is the way to do it... when will there be a one-button-click solution? The underlying data stays the same, so it could not be that hard to build..?
    Without that basic functionality, Crystal Reports is not an option for us anymore.
    Background: We have hundreds of reports that were created against an ODBC (RDO) data source (a SQL Anywhere database) by linking the tables and selecting the fields we want to use in the report. This worked fine, at least as long as we used the RDC method in our application. Because we are moving to .NET we cannot use the RDC anymore and have to use the .NET methods. However, the .NET SDK is very limited. The only way to add more complex WHERE statements is to use the Command functionality of Crystal Reports.

    Hi Hans,
    Let me explain: Command objects are sent directly to the DB server as is. CR does not alter the SQL at all. As you know, in SQL there are minor differences between the various servers, and the syntax can vary from DB to DB. Because of this, CR could not build in the logic to handle the vast number of potential mappings, and not just SQL types but also PC databases.
    The current versions of CR do not have the ability to map reports based on direct DB connections to a Command object. Possibly the next version may have this feature, but we can't say for sure until it's released. Check our web site for new versions and their functionality.
    Next, about moving off the RDC to .NET: you are correct. CR no longer allows you to modify the SQL statement directly. The RDC was actually supposed to stop you too, but it did work.
    One possible solution is to use .NET and in-process RAS (Report Application Server) with record sets. Your app can get the SQL from the report, process the query, append the info onto the WHERE clause as well as other filtering, and then set the existing report's data source to the record set. As long as your data is less than 5000 rows there should be no performance issues; over 5K you may start to see slow report processing. It depends on other info, report complexity, etc.
    If this is an option for you, please post your question to the Crystal .NET development forum. Also, you will need to use the 12.x.2000.x versions of the CR .NET assemblies, and not the basic 10.5 or 10.2 versions that come with .NET 2008 and 2005.
    Thank you
    Don

  • Generating pdf report via concurrent manager and xml publisher

    I have a report (RDF) which works fine in eBusiness Suite, but I'd like to create translatable reports with a richer layout (compared with Report Builder) based on the functionality of XML Publisher. I've made all the necessary steps: first change the output format for that report in the concurrent manager from PDF to XML, submit the report to get the XML file, use it in the Microsoft Word Template Builder to make the layout in an RTF file, and create the data definition and template under XML Publisher Administrator in eBusiness Suite.
    When I submitted that report from the concurrent manager, after the concurrent manager finished its job I got a warning message.
    Also, the output file is not generated, and in the log file there is a message: One or more post-processing actions failed. Consult the OPP service log for details.
    The concurrent manager works fine, but there is a problem when the concurrent manager sends the XML output of the report to XML Publisher, which has to create the PDF layout based on the RTF template.
    I think there might be a problem with report complexity, since I've used layout formatting, API calls to eBusiness Suite, parameters, placeholders, an attached library, etc.
    If it is impossible to get the new report based on an XML Publisher template, what is the purpose of XML Publisher if it is so limited, while every eBusiness Suite report is so complex?
    I hope there is a person on this forum who has had the same problem; otherwise I'll give up on XML Publisher altogether, at least until a better version of the tool comes along, as it seems to be in an early development phase and not ready for the market yet.

    All of the complex report logic is handled in your Oracle Report (rdf) file.
    If your Oracle report is working OK, then this report should be generating XML.
    After running your report, go to the View Requests window. Highlight your request, and click the Diagnostics button. Then click the 'View XML' button. If you see XML data, then you should be OK.
    When I faced a similar issue, it was because the '1 step' process didn't work for the particular report I was using. The '1 step' process doesn't work for every report. Which report are you using?
    If the report is submitted via a form (not the concurrent manager request form), then you probably need to use 2 steps to produce the desired output.
    For more information, see the following thread:
    Enhancment to avoid 2 step check printing
    HTH,
    Mark K

  • BUSINESS USER CAN'T OPEN THE REPORT

    Hello experts,
    One of our business users (a branch manager) working outside the main office is having issues with one of our dashboard reports. What I don't understand is that when I open the same report here in my office, everything works fine. We have tried rebooting the computer and checking his privileges; so far nothing has worked. Below is the error message he is getting:
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 46008] Query Failed: [nQSError: 46008] Internal error: File server\Query\Optimizer\Request\Src\SQRRqDumpout.cpp, line 2929.
    Has anyone encountered the same issue? I will appreciate your help.

    Hello again!
    What we need to do is come up with some tests to isolate what the issue is. If they are the only user/branch having the issue, then that is somewhere to start. Let's try:
    1) Get the user to log on to their Dashboard as someone else and access the same report. This will help us find out if it's an account issue.
    2) Get the user to log on to the Dashboard as their own user and try accessing the data from a different branch. This will help us find out if it's a data issue.
    Additionally, could you help answer a few other questions?
    1) What version of OBIEE are you using?
    2) What OS?
    3) What's the Data Source?
    4) Is the report complex? Does it have any long calculations in it? If you could send the request XML, that would be great.
    Thanks,
    Chris

  • BO XI RAS performance issue

    Hi all,
    We have just migrated our Java-based application from the Crystal Reports 9 Report Application Server to BO Enterprise 3.1.
    We imported the existing CR9 reports into Crystal Reports 2008 and saved them to the BOE 3.1 repository.
    However, we observed that the time required to generate a particular report is much longer than on the existing production system.
    The same report can be generated in the production environment within 10 seconds.
    However, the same report on the new BOE 3.1 platform runs over 40 seconds.
    The said report is frequently generated by users, and the lengthened processing time, from 10 seconds to 40 seconds, is not acceptable.
    We then added the "-trace" option on the BO RAS server to try to see what goes wrong.
    We observed the particular delay in the trace log as follows:
    TraceLog 2010 11 5 18:40:01.616 3572 6068 (Administrator:60) (..\cdtsagent.cpp:3117): doOneRequest saRequestId_getPromptConnInfos in
    TraceLog 2010 11 5 18:40:01.616 3572 6068 (Administrator:60) (..\cdtsagent.cpp:658): JobSharing: getMatchingPromptingState: begin
    TraceLog 2010 11 5 18:40:01.616 3572 6068 (Administrator:60) (..\cdtsagent.cpp:666): JobSharing: getMatchingPromptingState: job is bound, or metadata not retrieved. Do not search for a prompt state.
    TraceLog 2010 11 5 18:40:01.632 3572 6068 (Administrator:60) (..\cdtsagent.cpp:858): JobSharing: getMatchingReportHandler: begin.
    TraceLog 2010 11 5 18:40:01.632 3572 6068 (Administrator:60) (..\cdtsagent.cpp:866): JobSharing: getMatchingReportHandler: job bound, null doc, or job locked and not a write action. returning.
    TraceLog 2010 11 5 18:40:15.194 3572 6068 (Administrator:60) (..\cdtsagent.cpp:658): JobSharing: getMatchingPromptingState: begin
    TraceLog 2010 11 5 18:40:15.210 3572 6068 (Administrator:60) (..\cdtsagent.cpp:666): JobSharing: getMatchingPromptingState: job is bound, or metadata not retrieved. Do not search for a prompt state.
    It seems the RAS server is busy processing something from 18:40:01 to 18:40:15.
    We have been playing with different server parameters and report options to try to improve the performance.
    It seems we are stuck at this point, and we have no concrete direction on how to shorten the report processing time.
    Although the particular report has a large number of subreports (52 subreport objects),
    we don't expect the performance of BO XI to be much worse than CR9 RAS.
    Any help / suggestions are much appreciated.
    Best regards,
    Ivan
    Edited by: Ivan Wong on Nov 16, 2010 4:50 PM

    Hello Ted Ueda,
    The reports are generated via our web application using the Java SDK.
    Is there any reference available on how we can use the Crystal Reports Page Server to complete a similar task?
    In the meantime, we have already tried reducing the report complexity and were able to cut the report generation time down to 20s.
    However, it is still way too long for the users to accept.
    Based on the trace log, I am also thinking RAS is trying to look for a cached report.
    Do you know if I can force the system not to cache, in order to try to speed up the processing?
    I don't see any report option that controls the caching behaviour.
    Thanks and regards,
    Ivan

  • Need help printing page footer conditionally based on page number

    In the layout I'm working on, there is a <?call:footer?> in the footer section. I want to avoid calling it for certain pages, e.g. not print it on the first page.
    Is there a page variable available that can be used to conditionally test and print this? (In the example below, find something replacing "currentpagevariable".)
    Eg. <?if:currentpagevariable!='1'?><?call:footer?><?end if?>
    Note: I've tried to use the "Different first page" option, but due to the complexity of the report header I cannot use this option.
    Any suggestions as soon as possible would be much appreciated.
    /Thanks, Dan

    This is not possible.

  • DBase Driver problems

    I have been using Crystal for about 15 years, mostly producing reports from FoxPro tables.  Over the years, the drivers would periodically get broken and eventually fixed. 
    The application was stuck on 8.5 for years for a couple of reasons: IT WORKED, and Crystal 9+ changes required application changes. With Crystal 2008, I finally convinced the client we had to change.
    Most of the simple reports ported successfully with no significant changes.  However, the same cannot be said for the complex reports.  200 page reports that used to take about a minute to produce now fail with 'Unexpected error' or 'memory error' after 10-15 minutes, or they may actually work sometimes after several minutes.  Forget about producing reports that are 2000 pages!
    The reports have been rewritten to work in smaller pieces, and through the viewer I am able to simulate paging by swapping files between 'groups' of pages.  This seemingly works, but the errors pop up again when trying to export the 'groups' to PDF files. 
    My question is this:  Does anyone know of older xbase drivers that work better?  I've noticed posts that suggest there are ones out there that may be better.  I am using crdb_p2bxbse 12.0.0.683.
    At some point in my testing I've tried modifying the reports to use the ADO drivers instead.  This was somewhat successful, but not overwhelmingly.
    I'm about to downgrade back to working code (Crystal 8.5 based), but don't really want to do that. Any other suggestions?

    Hi Steven,
    That's a serious amount of memory leaking. Go to Microsoft's web site and download a tool called DebugDiag.
    Run it, set it for memory leaks and handles, and select your EXE. When the app crashes, DebugDiag will generate a dump file. You can then open the dump file using this same tool. It should show where the memory leak is and in which component.
    Next, we need to know which reporting engine you are using: RDC, RAS, CRPE32, OCX, etc.?
    If that does not help, or it's not obvious where the leak is, I suggest you purchase a support incident and have a dedicated engineer take the case. If you do, add a note and have the case assigned to me; I have access to all our source and can drill into where the issue may be. If it's an issue, I can track it to get it resolved.
    Have you tried a simple report to see if it also has issues? I'm curious if it's report complexity or the amount of data being returned.
    Thank you
    Don

  • It seems hybrid database

    Hi all :-D,
    I've designed an OLTP database with all tables normalized.
    But there are tables that contain millions of rows.
    The reports of the application were simple, but
    now the customer needs complex reports (complex selects)
    with many tables of millions of rows, returning millions of rows
    to analyze the result in the web application.
    This database seems to be a hybrid.
    What's the best way to design this?
    I thought of creating another schema designed purely as a star schema,
    and creating materialized views to load data from the OLTP schema
    into this schema.
    Today, the application user has synonyms to the OLTP schema. So
    I thought of creating other synonyms for that user pointing to the
    tables of the star schema.
    What do you think about this design?
    Is it valid? Or am I tripping? :-)
    In this case, what's the best way?
    Thank you very much!!!!

    Hmm, nobody answered yet ;)
    OK. If you have a big box and plenty of available resources, you can of course create reports on the normalized data model. Reports will be fully up to date (as you have the necessary resources!), nobody complains and everything is fantastic!
    Unfortunately, most of the time users already complain about slowness of the system, and additional reports will definitely take away the last CPU and IO unit :)
    So it depends on your needs.
    Of course, if you are creating a denormalized model for reports in the same database, you'll take away some resources from OLTP users anyway, unless you have an overnight silence period and do both the data load and report generation only overnight. Your aim, of course, is to make the resources necessary for data load and report generation on the denormalized model LESS than those necessary for report generation on the normalized schema.
    Doing both OLTP and reports on the same database also brings some other challenges: you can set only one optimizer mode at instance level, so you have to remember to alter it for either the OLTP app or the report app. Most probably you now have first_rows(_n), but for reports you'll need all_rows. Some other parameters, like workarea_size_policy, could also differ between OLTP and warehouse apps.
    Speaking about the app user that has synonyms to the OLTP schema: I suggest you also create another app user for the star schema, because Oracle has built-in features to manage load on the DB - profiles and resource plans - but both of them work only at user level, so most probably you'll need to manage resources for the OLTP and report apps differently.
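The materialized-view idea from the question can be sketched roughly as follows; the schema, table and column names here are invented for illustration:

```sql
-- Denormalized summary in the star schema, refreshed on demand
-- (e.g. overnight) from the normalized OLTP tables.
CREATE MATERIALIZED VIEW star.sales_by_region_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
SELECT c.region,
       TRUNC(s.sale_date) AS sale_day,
       SUM(s.amount)      AS total_amount
FROM   oltp.sales     s
JOIN   oltp.customers c ON c.customer_id = s.customer_id
GROUP  BY c.region, TRUNC(s.sale_date);
```

A nightly DBMS_MVIEW.REFRESH of such views keeps the report-generation cost off the OLTP tables, at the price of reports being only as fresh as the last refresh.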
    <shameless note>
    You can look at my paper about searching in normalized schemas using materialized views http://www.gplivna.eu/papers/mat_views_search.htm
    if you need it ;)
    </shameless note>
    Gints Plivna
    http://www.gplivna.eu/

  • BAPI_GOODSMVT_CREATE and movements 701 702

    Moving the MI10 transaction to the scanner we wanted to use BAPI_GOODSMVT_CREATE to create the actual goods movement.
    Research in SCN indicates that this cannot be done. It does work for 711 and 712. Unfortunately, using 711 and 712 increases reporting complexity. (I guess that in my book, an inventory adjustment is an inventory adjustment)
    If I cannot use BAPI_GOODSMVT_CREATE then I will probably use FM MB_CREATE_INVENTORY_DIFFERENCE (used by MI10)
    My question is:
    Can I add MB11 as an allowable transaction for 701 and 702 in MM configuration OMJJ? MB11 is an allowable transaction for 711 and 712. Adding MB11 as an allowable transaction for 701 and 702 would allow 701 and 702 to be valid for BAPI_GOODSMVT_CREATE with goods movement code 06.
    Is anyone familiar enough with this to explain any ramifications?
    Thanks in advance

    Thanks, Sathish.
    I did some testing without issue, but was not sure of any ramifications.
    I will accept my testing and your experience.

  • Unable to resolve the query

    Hi
    I have the following table.
    Complex No | Complex Amount | Child Invoice | Invoice Amount | Invoice Date
    -----------|----------------|---------------|----------------|-------------
    123        | 9000           | 245           | 2000           | 01-Oct-13
    123        |                | 456           | 3000           | 01-Nov-13
    123        |                | 567           | 4000           | 31-Oct-13
    My requirement is to generate the following report
    Complex No | Complex Amount | Till Date | Future Date
    -----------|----------------|-----------|------------
    123        | 9000           | 6000      | 3000
    Where Till Date is the sum of the amounts of invoices dated <= 31-Oct-13, and Future Date is the sum of the amounts of invoices dated > 31-Oct-13.
    Can anyone help me with the query?

    with t as (
               select 123 complex_no,9000 complex_amount,245 child,2000 invoice_amount,date '2013-10-01' invoice_date from dual union all
               select 123,null,456,3000,date '2013-11-01' from dual union all
               select 123,null,567,4000,date '2013-10-31' from dual
              )
    select  complex_no,
            sum(complex_amount) complex_amount,
            sum(case when invoice_date <= last_day(sysdate) then invoice_amount end) till_date,
            sum(case when invoice_date > last_day(sysdate) then invoice_amount end) future_date
      from  t
      group by complex_no
    COMPLEX_NO COMPLEX_AMOUNT  TILL_DATE FUTURE_DATE
           123           9000       6000        3000
    SCOTT@orcl >
    SY.
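One caveat about the answer above: last_day(sysdate) only matches the requested 31-Oct-13 cutoff while the query runs during October 2013. If the cutoff is meant to be that literal date, the CASE expressions can compare against a date literal instead (invoices stands in for whatever the real table is called):

```sql
SELECT complex_no,
       SUM(complex_amount) AS complex_amount,
       SUM(CASE WHEN invoice_date <= DATE '2013-10-31'
                THEN invoice_amount END) AS till_date,
       SUM(CASE WHEN invoice_date >  DATE '2013-10-31'
                THEN invoice_amount END) AS future_date
FROM   invoices
GROUP  BY complex_no;
```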

  • Give priority to an agent

    I have an IPCC 7 enterprise and I'd like to know if there's a way that I can give priority to an agent.
    I have 5 agents with the same skillgroup but I want that the calls go first to a specific agent and in case that agent is busy the call goes to another.
    I know that in CUCM I can make it with hunt groups, is there a way i can do the same in the IPCC script?
    Regards!

    We have 100 agents as of now, and 10 new agents are joining soon. We are looking for a setup in which calls come to the 100 agents and the overflow calls go to the newly hired agents.
    I would not do it that way.
    The way I would do it would be to put the new hires in the new skill group (SG_new) and would start off the queuing part of your script with a Select (LAA) node and Skill Group (SG_new) node. The X port of the Select goes to your normal Queue to Skill Group (SG_100) with appropriate queuing.
    The advantage of this is a caller gets a new agent if one is available, but calls do not queue to the new agents. When a new agent is finished with the current call, there will not be a call queued up and waiting. The new agent will get the next new call, if they remain available. This prevents new agents from getting all flustered by calls queuing up, and coaching with regards to the call that just finished is easy. The new agents will never see calls in queue to them on that SG.
    Will this setup, making a separate skill group for these 10 new agents, make reporting complex or not feasible?
    Your primary reporting object in IPCC is the Call Type.
    I would put a call type between the check port of the LAA node and the SG node to measure the new agents. Since agents have to be available to get through here, the ASA of this call type will always be zero (assuming auto answer) and the abandon rate will be zero (assuming no abandon ringing). So don't expect otherwise - but you will be counting the calls they get.
    The other call type should be set on the X port of the LAA to count the calls the SG_100 get and measure abandon rate and ASA.
    Skill Group reports on the two different skill groups are going to be useful for seeing what the talk time is, how many holds are placed, how many are transferred. The two skill groups are going to provide different stats - of course. I see no issues here.
    (I was typing this while David was replying. As you can see, there are different approaches. I am always mindful of call type reporting, which influences my approach.)
    Regards,
    Geoff

  • Simple complex report query

    hi gurus,
    I had prepared two complex reports separately, having the same selection screen, internal tables and declarations. Now I have to combine both reports into one single report. Based upon <b>one field (i.e., field PROCESS_TYPE)</b> of the selection criteria (i.e., S_PR_TYP), I have to display 2 outputs: one for SHC and another for CONF. But the logic and header display for the 2 outputs are different. Please let me know where I should write the logic and how the logic should be built.
    the code is as follows :
    <u><b>The code which is common for both the reports:</b></u>
    $$********************************************************************
    $$    TABLES DECLARATION
    $$********************************************************************
    TABLES: crmd_orderadm_h,
            crmd_orderadm_i,
            bbp_pdigp.
    $$********************************************************************
    $$  TYPE-POOLS
    $$********************************************************************
    TYPE-POOLS: slis, list.
    $$********************************************************************
    $$    GLOBAL TYPES
    $$********************************************************************
    TYPES: BEGIN OF y_str1,
           CLIENT         TYPE CRMD_ORDERADM_H-CLIENT,
           guid           TYPE crmd_orderadm_h-guid,
           object_id      TYPE crmd_orderadm_h-object_id,
           object_type    TYPE crmd_orderadm_h-object_type,
           process_type   TYPE crmd_orderadm_h-process_type,
           created_at     TYPE crmd_orderadm_h-created_at,
           changed_at     TYPE crmd_orderadm_h-changed_at,
           archiving_flag TYPE crmd_orderadm_h-archiving_flag,
           deliv_date     TYPE bbp_pdigp-deliv_date,
           final_entry    TYPE bbp_pdigp-final_entry,
           del_ind        TYPE bbp_pdigp-del_ind,
           END OF y_str1.
    TYPES: BEGIN OF y_str2,
           guid1          TYPE crmd_orderadm_h-guid,
           object_id      TYPE crmd_orderadm_h-object_id,
           object_type    TYPE crmd_orderadm_h-object_type,
           process_type   TYPE crmd_orderadm_h-process_type,
           created_at     TYPE crmd_orderadm_h-created_at,
           changed_at     TYPE crmd_orderadm_h-changed_at,
           archiving_flag TYPE crmd_orderadm_h-archiving_flag,
           guid2          TYPE crmd_orderadm_i-guid,
           header         TYPE crmd_orderadm_i-header,
           guid3          TYPE bbp_pdigp-guid,
           deliv_date     TYPE bbp_pdigp-deliv_date,
           final_entry    TYPE bbp_pdigp-final_entry,
           del_ind        TYPE bbp_pdigp-del_ind,
           END OF y_str2.
    $$********************************************************************
    $$    GLOBAL CONSTANTS
    $$********************************************************************
    CONSTANTS: C_BLANK_F(1)           TYPE C VALUE 'X',
               C_DEL_IND_F(1)         TYPE C VALUE 'X',
               C_ARCHIVING_FLAG(1)    TYPE C VALUE 'X',
               C_FINAL_ENTRY_F(1)     TYPE C VALUE 'X',
               C_FINAL_ENTRY_SPACE(1) TYPE C VALUE ' ',
               C_CBA_SPACE(1)         TYPE C VALUE ' ',
               C_DEL_SPACE(1)         TYPE C VALUE ' '.
    $$********************************************************************
    $$    Global Elementary Variables
    $$********************************************************************
    DATA: w_ld_lines                 TYPE i,
          w_ld_linesc(10)            TYPE c,
          w_del_ind                  TYPE c,
          w_final_entry              TYPE c,
          w_COUNT_cba                TYPE I VALUE 0,
          w_count_f                  TYPE I VALUE 0,
          W_BLANK_F                  TYPE C,
          W_FINAL_ENTRY_F            TYPE C,
          W_DEL_COUNT                TYPE I VALUE 0,
          W_PER_CBA1                 TYPE P decimals 3,
          W_PER_CBA                  TYPE P decimals 2,
          W_PER_E_LINE               TYPE I,
          W_N                        TYPE I.
*$$********************************************************************
*$$    GLOBAL INTERNAL TABLES (custom structure)
*$$********************************************************************
    DATA:   t_str_sc1 TYPE STANDARD TABLE OF y_str1 INITIAL SIZE 1.
    DATA:   t_str_sc2 TYPE STANDARD TABLE OF y_str2 INITIAL SIZE 1.
    DATA:   t_header      TYPE   slis_t_listheader,
            w_header      TYPE   slis_listheader,
            e_line        LIKE   w_header-info.
DATA:   v_index       LIKE SY-TABIX VALUE 1.   "first row after the sort
*$$********************************************************************
*$$    GLOBAL FIELD-SYMBOLS
*$$********************************************************************
    FIELD-SYMBOLS: <FS_STR1> TYPE Y_STR1,
                   <FS_STR2> TYPE Y_STR2.
*$$********************************************************************
*$$    PARAMETERS & SELECT-OPTIONS
*$$********************************************************************
    SELECTION-SCREEN: BEGIN OF BLOCK b1 WITH FRAME TITLE text-001.
    SELECT-OPTIONS: s_scno   FOR crmd_orderadm_h-object_id,
                    s_pr_typ FOR crmd_orderadm_h-process_type NO INTERVALS NO DATABASE SELECTION NO-EXTENSION DEFAULT 'SHC',
                    s_change FOR crmd_orderadm_h-changed_at.
    SELECTION-SCREEN END OF BLOCK b1.
*$$********************************************************************
*$$    START-OF-SELECTION
*$$********************************************************************
    START-OF-SELECTION.
      REFRESH t_str_sc1.
      SELECT client
             guid
             object_id
             object_type
             process_type
             created_at
             changed_at
             archiving_flag
             FROM crmd_orderadm_h INTO TABLE t_str_sc1
             WHERE object_id IN s_scno AND  changed_at IN s_change AND process_type IN s_pr_typ.
      IF sy-subrc <> 0.
        MESSAGE I002.
      ENDIF.
      LOOP AT t_str_sc1 ASSIGNING <FS_STR1>.
        REFRESH t_str_sc2.
        SELECT a~guid
               a~object_id
               a~object_type
               a~process_type
               a~created_at
               a~changed_at
               a~archiving_flag
               b~guid
               b~header
               c~guid
               c~deliv_date
               c~final_entry
               c~del_ind
          INTO TABLE t_str_sc2
          FROM crmd_orderadm_h AS a INNER JOIN crmd_orderadm_i AS b
         ON a~guid EQ b~header INNER JOIN bbp_pdigp AS c
         ON b~guid EQ c~guid
         WHERE a~guid eq <FS_STR1>-guid.
    <u><b>THE LOGIC FOR FIRST REPORT:</b></u>
    *"logic for displaying Delivery date at Header level
      SORT T_STR_SC2 BY DELIV_DATE.
      DESCRIBE TABLE T_STR_SC2 LINES W_N.
*     After the sort, the first row holds the earliest delivery date
      READ TABLE T_STR_SC2 INDEX v_index ASSIGNING <FS_STR2>.
      IF SY-SUBRC = 0.
        <FS_STR1>-deliv_date = <FS_STR2>-deliv_date.
        MODIFY  T_STR_SC1 FROM <FS_STR1> TRANSPORTING DELIV_DATE.
      ENDIF.
    *"Setting up the flags for the entire items in CRMD_ORDERADM_H as per the scenario
        LOOP AT T_STR_SC2 ASSIGNING <FS_STR2> WHERE HEADER EQ <FS_STR1>-GUID.
          IF <FS_STR2>-DEL_IND NE 'X'.
            IF <FS_STR2>-FINAL_ENTRY NE 'X'.
              W_BLANK_f = C_BLANK_F.
            ELSE.
              W_FINAL_ENTRY_F = C_FINAL_ENTRY_F.
            ENDIF.
          ENDIF.
        ENDLOOP.
    *"Logic started at item level
        LOOP AT T_STR_SC2 ASSIGNING <FS_STR2> WHERE HEADER EQ <FS_STR1>-GUID.
          IF W_BLANK_F NE 'X'.
            IF W_FINAL_ENTRY_F NE 'X'.
    *" Displaying the status for Del 'X' , Final_entry ' ', Archive_flag 'X'.
          <FS_STR1>-DEL_IND = C_DEL_IND_F.
          <FS_STR1>-FINAL_ENTRY = C_FINAL_ENTRY_SPACE.
          <FS_STR1>-ARCHIVING_FLAG = C_ARCHIVING_FLAG.
*"        counters are incremented once per header after this loop, not per item
          MODIFY  T_STR_SC1 FROM <FS_STR1> TRANSPORTING DEL_IND FINAL_ENTRY ARCHIVING_FLAG.
            ELSE.
    *" Displaying the status for Del ' ' , Final_entry 'X', Archive_flag 'X'.
          <FS_STR1>-FINAL_ENTRY = C_FINAL_ENTRY_F.
          <FS_STR1>-DEL_IND = C_DEL_SPACE.
          <FS_STR1>-ARCHIVING_FLAG = C_ARCHIVING_FLAG.
*"        counters are incremented once per header after this loop, not per item
          MODIFY  T_STR_SC1 FROM <FS_STR1> TRANSPORTING FINAL_ENTRY DEL_IND ARCHIVING_FLAG.
            ENDIF.
          ELSE.
    *" Displaying the status for Del ' ' , Final_entry ' ', Archive_flag ' '.
            <FS_STR1>-DEL_IND = C_DEL_SPACE.
            <FS_STR1>-FINAL_ENTRY = C_FINAL_ENTRY_SPACE.
            <FS_STR1>-ARCHIVING_FLAG = C_CBA_SPACE.
            MODIFY  T_STR_SC1 FROM <FS_STR1> TRANSPORTING DEL_IND FINAL_ENTRY ARCHIVING_FLAG .
          ENDIF.
        ENDLOOP.    "end of t_str_sc2
    IF <FS_STR1>-DEL_IND EQ C_DEL_IND_F.
      W_DEL_COUNT = W_DEL_COUNT + 1.
    ENDIF.
    IF <FS_STR1>-FINAL_ENTRY EQ C_FINAL_ENTRY_F.
      w_count_f = w_count_f + 1.
    ENDIF.
    IF <FS_STR1>-ARCHIVING_FLAG EQ C_ARCHIVING_FLAG.
      w_COUNT_cba = w_COUNT_cba + 1.
    ENDIF.
        CLEAR: W_BLANK_F , W_FINAL_ENTRY_F.
    *"Logic ended at item level
      ENDLOOP.    "end of t_str_sc1
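The flag handling above reduces to one rule per shopping cart: a cart stays untouched while any item is still open (neither deleted nor final); otherwise it is archivable, via the final-entry path if a live final item exists, else via the deletion path. A minimal Python sketch of that rule (the function name and dict keys are mine, not part of the report):

```python
def classify_header(items):
    """Classify a shopping-cart header from its item flags.

    Each item is a dict with 'del_ind' and 'final_entry' ('X' or ' ').
    Mirrors the report's rule: a header can be archived only when no
    item is still open, i.e. neither deleted nor marked final.
    """
    has_open_item = any(
        i["del_ind"] != "X" and i["final_entry"] != "X" for i in items)
    has_final_item = any(
        i["del_ind"] != "X" and i["final_entry"] == "X" for i in items)

    if has_open_item:
        # At least one live, non-final item: nothing to archive yet
        return {"del_ind": " ", "final_entry": " ", "archive": " "}
    if has_final_item:
        # All live items are final: archivable via the final-entry path
        return {"del_ind": " ", "final_entry": "X", "archive": "X"}
    # Every item is deleted: archivable via the deletion path
    return {"del_ind": "X", "final_entry": " ", "archive": "X"}
```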
    *" when Transaction type is SHC
    IF <FS_STR1>-process_type EQ 'SHC'.
    DESCRIBE TABLE t_str_sc1 LINES w_ld_lines.
      w_ld_linesc = w_ld_lines.
      CONCATENATE ' TOTAL NO OF RECORDS SELECTED:' w_ld_linesc INTO e_line SEPARATED BY space.
    *" Percentage of Archived SC's
      W_PER_E_LINE = w_ld_lines.
      W_PER_CBA1 = W_COUNT_CBA / W_PER_E_LINE.
      W_PER_CBA = W_PER_CBA1 * 100.
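The three statements above compute the archivable percentage in packed decimals: a ratio held at 3 decimals, then scaled to a 2-decimal percentage. A small Python equivalent using Decimal (the function name and the zero-division guard are mine; ABAP's commercial rounding is approximated here with ROUND_HALF_UP, and note the original would short-dump if w_ld_lines were zero):

```python
from decimal import Decimal, ROUND_HALF_UP

def archivable_percentage(count_cba, total):
    """Mirror of W_PER_CBA: ratio at 3 decimals, percentage at 2."""
    if total == 0:
        # Guard not present in the original ABAP
        return Decimal("0.00")
    ratio = (Decimal(count_cba) / Decimal(total)).quantize(
        Decimal("0.001"), rounding=ROUND_HALF_UP)
    return (ratio * 100).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(archivable_percentage(7, 26))  # → 26.90
```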
    *" Displaying the total no of records fetched for DB
      FORMAT COLOR 7.
      WRITE:/9 e_line .
      WRITE:/10 'TOTAL NO OF FINAL ENTRIES SELECTED:',        w_count_f.
      WRITE:/10 'TOTAL NO OF DELETE ENTRIES SELECTED:',       W_DEL_COUNT.
      WRITE:/10 'TOTAL NO OF ENTRIES SELECTED FOR ARCHIVING:',w_COUNT_cba.
      SKIP.
      WRITE:/10 'PERCENTAGE OF CAN BE ARCHIVED:',W_PER_CBA,'%'.
      FORMAT COLOR 3.
      SKIP.
      WRITE:/30 '#### SC HAVING FINAL ENTRY INDICATOR FOR ALL ITEM IN SRM #####'.
      FORMAT COLOR OFF.
      WRITE:/30(63) SY-ULINE.
      ULINE.
    *" Displaying Headings for the Report
    NEW-LINE SCROLLING.
      WRITE:/3        'Transaction No',                          18  sy-vline,
             19       'Transaction Type',                        36  sy-vline,
             37       'Business Trans.Cat',                      56  sy-vline,
             57       'Created On',                              68  sy-vline,
             69(10)   'Changed On',                              84  sy-vline,
             85       'Delivery date',                           99  sy-vline,
            100       'Final Entry Ind',                        115  sy-vline,
            116       'Deletion Ind',                           129  sy-vline,
            130       'Can be Archived',                        146  sy-vline.
    SET LEFT SCROLL-BOUNDARY COLUMN 19.
      ULINE.
*$$********************************************************************
*$$    DISPLAY DATA AT HEADER LEVEL FOR SHC
*$$********************************************************************
    *" Sort the SC in Sequence
      SORT t_str_sc1 BY object_id.
      IF SY-SUBRC = 0.
    *" Displaying the Report at Header level
        LOOP AT t_str_sc1 ASSIGNING <FS_STR1>.
          IF NOT <FS_STR1>-archiving_flag IS INITIAL.
            FORMAT COLOR 7.
          ELSE.
            FORMAT COLOR 3.
          ENDIF.
      WRITE:/3        <FS_STR1>-object_id,                       18  sy-vline,
             19       <FS_STR1>-process_type,                    36  sy-vline,
             37       <FS_STR1>-object_type,                     56  sy-vline,
             57       <FS_STR1>-created_at,                      68  sy-vline,
             69(10)   <FS_STR1>-changed_at,                      84  sy-vline,
             85       <FS_STR1>-deliv_date,                      99  sy-vline,
            100       <FS_STR1>-final_entry,                    115  sy-vline,
            116       <FS_STR1>-del_ind,                        129  sy-vline,
            130       <FS_STR1>-archiving_flag,                 146  sy-vline.
        ENDLOOP.   "end of t_str_sc1 displaying at header level
      ENDIF.    "End of SY-SUBRC
    *ENDCASE.
    ENDIF.   "End of displaying Transaction type as SHC
    *" when Transaction type is CONF
    IF <FS_STR1>-process_type EQ 'CONF'.
      DESCRIBE TABLE t_str_sc1 LINES w_ld_lines.
      w_ld_linesc = w_ld_lines.
      CONCATENATE ' TOTAL NO OF RECORDS SELECTED:' w_ld_linesc INTO e_line SEPARATED BY space.
    *" Percentage of Archived SC's
      W_PER_E_LINE = w_ld_lines.
      W_PER_CBA1 = W_COUNT_CBA / W_PER_E_LINE.
      W_PER_CBA = W_PER_CBA1 * 100.
    *" Displaying Headings for the Report
    *" Displaying the total no of records fetched for DB
      FORMAT COLOR 7.
      WRITE:/9 e_line .
    WRITE:/10 'TOTAL NO OF FINAL ENTRIES SELECTED:',        w_count_f.
      WRITE:/10 'TOTAL NO OF DELETE ENTRIES SELECTED:',       W_DEL_COUNT.
      WRITE:/10 'TOTAL NO OF ENTRIES SELECTED FOR ARCHIVING:',w_COUNT_cba.
      SKIP.
      WRITE:/10 'PERCENTAGE OF CAN BE ARCHIVED:',W_PER_CBA,'%'.
      FORMAT COLOR 3.
      SKIP.
      WRITE:/30 '#### SC HAVING FINAL ENTRY INDICATOR FOR ALL ITEM IN SRM #####'.
      FORMAT COLOR OFF.
      WRITE:/30(63) SY-ULINE.
      ULINE.
    NEW-LINE SCROLLING.
      WRITE:/3        'Transaction No',                          18  sy-vline,
             19       'Transaction Type',                        36  sy-vline,
             37       'Business Trans.Cat',                      56  sy-vline,
             57       'Created On',                              68  sy-vline,
             69(10)   'Changed On',                              84  sy-vline,
             85       'Delivery date',                           99  sy-vline,
        100       'Final Entry Ind',                        115  sy-vline,
        116       'Deletion Ind',                           129  sy-vline,
        130       'Can be Archived',                        146  sy-vline.
    SET LEFT SCROLL-BOUNDARY COLUMN 19.
      ULINE.
    *$$********************************************************************
    *$$    DISPLAY DATA AT HEADER LEVEL
    *$$********************************************************************
    *" Sort the SC in Sequence
      SORT t_str_sc1 BY object_id.
      IF SY-SUBRC = 0.
    *" Displaying the Report at Header level
        LOOP AT t_str_sc1 ASSIGNING <FS_STR1>.
          IF NOT <FS_STR1>-archiving_flag IS INITIAL.
            FORMAT COLOR 7.
          ELSE.
            FORMAT COLOR 3.
          ENDIF.
      WRITE:/3        <FS_STR1>-object_id,                       18  sy-vline,
             19       <FS_STR1>-process_type,                    36  sy-vline,
             37       <FS_STR1>-object_type,                     56  sy-vline,
             57       <FS_STR1>-created_at,                      68  sy-vline,
             69(10)   <FS_STR1>-changed_at,                      84  sy-vline,
             85       <FS_STR1>-deliv_date,                      99  sy-vline,
        100       <FS_STR1>-final_entry,                    115  sy-vline,
        116       <FS_STR1>-del_ind,                        129  sy-vline,
        130       <FS_STR1>-archiving_flag,                 146  sy-vline.
        ENDLOOP.   "end of t_str_sc1 displaying
      ENDIF.  "End of SY-SUBRC
    ENDIF.  "End of displaying Transaction type as CONF
<u><b>THE LOGIC FOR THE SECOND REPORT</b></u>
    LOOP AT t_str_sc1 ASSIGNING <FS_STR1>.
          REFRESH t_str_sc2.
          SELECT a~guid
                 a~object_id
                 a~object_type
                 a~process_type
                 a~created_at
                 a~changed_at
                 a~archiving_flag
                 b~guid
                 b~header
                 c~guid
                 c~deliv_date
                 c~final_entry
                 c~del_ind
            INTO TABLE t_str_sc2
            FROM crmd_orderadm_h AS a INNER JOIN crmd_orderadm_i AS b
           ON a~guid EQ b~header INNER JOIN bbp_pdigp AS c
           ON b~guid EQ c~guid
           WHERE a~guid eq <FS_STR1>-guid.
        IF NOT t_str_sc2[] IS INITIAL.
          LOOP AT T_STR_SC2 ASSIGNING <FS_STR2>.
            IF <FS_STR2>-DEL_IND NE C_DEL_SPACE.          "if 'X'
              <FS_STR2>-DEL_IND = C_DEL_IND_F.
              <FS_STR2>-ARCHIVING_FLAG = C_ARCHIVING_FLAG.
              MODIFY T_STR_SC2 FROM <FS_STR2>.
            ELSE.                                         "if ' '
              EXIT.
            ENDIF.
          ENDLOOP.    "End loop of t_str_sc2
          MOVE <FS_STR2>-DEL_IND TO <FS_STR1>-DEL_IND.
          MOVE <FS_STR2>-ARCHIVING_FLAG TO <FS_STR1>-ARCHIVING_FLAG.
          MODIFY T_STR_SC1 FROM <FS_STR1>.
        ELSE.   "no items found for this header
          <FS_STR1>-REMARKS = c_itnf.
          MODIFY T_STR_SC1 FROM <FS_STR1>.
        ENDIF.    "End of item check
      IF <FS_STR1>-DEL_IND eq C_DEL_IND_F.
             W_DEL_COUNT = W_DEL_COUNT + 1.
      ENDIF.
      IF  <FS_STR1>-ARCHIVING_FLAG eq C_ARCHIVING_FLAG.
            w_COUNT_cba = w_COUNT_cba + 1.
      ENDIF.
      ENDLOOP.  "End loop of t_str_sc1
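The second report's item loop implements a simpler rule: a cart is flagged for archiving only when every item carries the deletion indicator, and the scan stops at the first item that does not (the EXIT above). A Python sketch of that rule, with names of my own choosing:

```python
def all_items_deleted(items):
    """Return True when every item carries the deletion indicator 'X'.

    Mirrors the ABAP loop: stop scanning at the first non-deleted item
    (the EXIT), otherwise fall through with all items flagged. An empty
    item list never reaches this check in the report (the surrounding
    IF catches it), so returning True for [] is harmless here.
    """
    for item in items:
        if item["del_ind"] != "X":
            return False
    return True
```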
    ********************************" when Transaction type is CONF
    *******************************IF <FS_STR1>-process_type EQ 'CONF'.
      DESCRIBE TABLE t_str_sc1 LINES w_ld_lines.
      w_ld_linesc = w_ld_lines.
      CONCATENATE ' TOTAL NO OF RECORDS SELECTED:' w_ld_linesc INTO e_line SEPARATED BY space.
    *" Percentage of Archived SC's
      W_PER_E_LINE = w_ld_lines.
      W_PER_CBA1 = W_COUNT_CBA / W_PER_E_LINE.
      W_PER_CBA = W_PER_CBA1 * 100.
    *" Displaying Headings for the Report
    *" Displaying the total no of records fetched for DB
      FORMAT COLOR 7.
      WRITE:/9 e_line .
    WRITE:/10 'TOTAL NO OF FINAL ENTRIES SELECTED:',        w_count_f.
      WRITE:/10 'TOTAL NO OF DELETE ENTRIES SELECTED:',       W_DEL_COUNT.
      WRITE:/10 'TOTAL NO OF ENTRIES SELECTED FOR ARCHIVING:',w_COUNT_cba.
      SKIP.
      WRITE:/10 'PERCENTAGE OF CAN BE ARCHIVED:',W_PER_CBA,'%'.
      FORMAT COLOR 3.
      SKIP.
      WRITE:/30 '#### SC HAVING FINAL ENTRY INDICATOR FOR ALL ITEM IN SRM #####'.
      FORMAT COLOR OFF.
      WRITE:/30(63) SY-ULINE.
      ULINE.
      NEW-LINE SCROLLING.
      WRITE:/3        'Transaction No',                          18  sy-vline,
             19       'Transaction Type',                        36  sy-vline,
             37       'Business Trans.Cat',                      56  sy-vline,
             57       'Created On',                              68  sy-vline,
             69(10)   'Changed On',                              84  sy-vline,
            100       'Deletion Ind',                           112  sy-vline,
            113       'Can be Archived',                        129  sy-vline,
            130       'Remarks',                                 150  sy-vline.
      SET LEFT SCROLL-BOUNDARY COLUMN 19.
      ULINE.
    *$$********************************************************************
    *$$    DISPLAY DATA AT HEADER LEVEL
    *$$********************************************************************
    *" Sort the SC in Sequence
      SORT t_str_sc1 BY object_id.
      IF SY-SUBRC = 0.
    *" Displaying the Report at Header level
        LOOP AT t_str_sc1 ASSIGNING <FS_STR1>.
          IF NOT <FS_STR1>-archiving_flag IS INITIAL.
            FORMAT COLOR 7.
          ELSE.
            FORMAT COLOR 3.
          ENDIF.
          WRITE:/3        <FS_STR1>-object_id,                       18  sy-vline,
                 19       <FS_STR1>-process_type,                    36  sy-vline,
                 37       <FS_STR1>-object_type,                     56  sy-vline,
                 57       <FS_STR1>-created_at,                      68  sy-vline,
                 69(10)   <FS_STR1>-changed_at,                      84  sy-vline,
                100       <FS_STR1>-del_ind,                        112  sy-vline,
                113       <FS_STR1>-archiving_flag,                 129  sy-vline,
                130       <FS_STR1>-REMARKS,                        150  sy-vline.
        ENDLOOP.   "end of t_str_sc1 displaying
      ENDIF.  "End of SY-SUBRC
    **********************ENDIF.  "End of displaying Transaction type as CONF

Very difficult to give you a solution without having access to the actual data and tables, and some basic relationship model to explain the entities of the tables.
But one thing I found that makes dealing with complex queries a lot easier - easier to code, to read, and to maintain - is the WITH clause. It allows the kind of modularisation of code that we're used to in other languages.
The basic syntax is:

WITH alias1 AS (
  SELECT ...
),
alias2 AS (
  SELECT ...
),
aliasn AS (
  SELECT ...
)
SELECT ...
FROM alias1, ..., aliasn

This allows you to define distinct query sets once, and then re-use them in joins, selects, and even other sub-sets.
The resulting SQL is a lot less stressful on the eye and makes the whole "processing logic" of getting to the results much easier to analyse, follow and understand.
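To make the idea concrete, here is a small runnable sketch of the same WITH-clause pattern. It uses Python's sqlite3 only because that runs anywhere (SQLite accepts the same common-table-expression syntax as Oracle for this purpose); the tables and column names are invented for illustration:

```python
import sqlite3

# Hypothetical order/item tables standing in for the real ones
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, region TEXT);
    CREATE TABLE items  (order_id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 'EU'), (2, 'US'), (3, 'EU');
    INSERT INTO items  VALUES (1, 10.0), (1, 5.0), (2, 7.5), (3, 2.5);
""")

# Each WITH block defines a named sub-result once; the final SELECT
# then joins them like ordinary tables.
sql = """
WITH order_totals AS (
    SELECT order_id, SUM(amount) AS total
    FROM items
    GROUP BY order_id
),
eu_orders AS (
    SELECT order_id FROM orders WHERE region = 'EU'
)
SELECT e.order_id, t.total
FROM eu_orders e
JOIN order_totals t ON t.order_id = e.order_id
ORDER BY e.order_id
"""
rows = conn.execute(sql).fetchall()
print(rows)  # [(1, 15.0), (3, 2.5)]
```

Each named block is defined once and re-used in the final join, which is exactly what keeps the closing SELECT readable.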
