Duplicate data on Report

Hi Friends
I am running a query on an InfoSet. In the report, I am getting duplicate data.
How can I resolve it?
Regards,
Chama.

Hi,
Go through this link:
http://help.sap.com/saphelp_nw04/helpdata/en/ab/7a877cba29144b85d07f26f1f50cd8/frameset.htm
Are you using any time characteristic InfoObject in your InfoSet?
If yes, restrict that InfoObject.
If it is useful, assign points.
Regards,
Senthil

Similar Messages

  • How to show duplicate data in reports

    Hi experts,
I need to show duplicate data in my reports as well.
1. I have removed Distinct on the Advanced tab in the report.
2. In the RPD, I unchecked "Distinct supported".
Still I am not getting the duplicate records; the report is showing distinct data only. How can we resolve this?
    Thanks,
    F

    Hi,
Is BI showing
jan-08 customer 001 amount 200
or
jan-08 customer 001 amount 100?
What is the aggregation rule for amount? If it is SUM, then it should show 200 for amount.
Is there a column in your table that is different for those 2 lines with jan-08 customer 001 amount 100, like an ID or something?
If yes, then add that ID to your dimension table and then to your report.
In this case you will get those 2 lines.
There is another way, but I know it shouldn't be used: setting the aggregation rule to None.
    Regards
    Nicolae
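The effect Nicolae describes can be sketched in a few lines of Python (a minimal sketch; the row layout and the txn_id column are hypothetical): with a SUM aggregation rule the two identical-looking lines collapse into one total, and adding a distinguishing ID to the grouping keeps both.

```python
# Hypothetical rows: two bookings for the same customer in the same month.
rows = [
    {"month": "jan-08", "customer": "001", "txn_id": "A1", "amount": 100},
    {"month": "jan-08", "customer": "001", "txn_id": "A2", "amount": 100},
]

def aggregate(rows, keys):
    """Sum 'amount' over the given grouping keys (what a SUM rule does)."""
    out = {}
    for r in rows:
        k = tuple(r[f] for f in keys)
        out[k] = out.get(k, 0) + r["amount"]
    return out

# Without a distinguishing ID the two lines collapse into one total:
aggregate(rows, ["month", "customer"])             # -> {('jan-08', '001'): 200}
# With the ID in the grouping, both 100-amount lines survive:
aggregate(rows, ["month", "customer", "txn_id"])   # -> two entries of 100
```

This is why adding the ID to the dimension (and the report) makes both lines appear.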

  • Duplicate data on Report of infoset

    Hi Friends
I have created an InfoSet with 11 InfoProviders. One of the InfoProviders has the same contract number with different bond numbers. That's why it repeats in the query, but I don't want those values to repeat.
Is it possible to fix this by changing the InfoSet joins? Currently they are left outer joins.
Please advise.
    Regards,
    Chama.
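For what it's worth, the repetition Chama describes is plain join fan-out, and changing the join type alone will not remove it: any join (left outer or inner) returns one output row per matching bond row. A minimal Python sketch, with hypothetical table and field names:

```python
contracts = [{"contract": "C100", "contract_value": 500}]
bonds = [
    {"contract": "C100", "bond": "B1"},
    {"contract": "C100", "bond": "B2"},
]

def left_outer_join(left, right, key):
    """One output row per (left row, matching right row) pair;
    unmatched left rows are kept with no right-hand fields."""
    out = []
    for l in left:
        matches = [r for r in right if r[key] == l[key]] or [{}]
        for m in matches:
            out.append({**l, **m})
    return out

joined = left_outer_join(contracts, bonds, "contract")
# The single contract row now appears once per bond number, so the
# contract-level values repeat in the query result.
```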

    Hi,
    Join control 
    Definition
    An area of the screen belonging to the InfoSet Builder. The InfoProviders that are included in the join are displayed in the join control.
    Use
    You define join conditions in the join control. There have to be valid join conditions before the system is able to activate the InfoSet. For more information see Defining Join Conditions.
    To edit two InfoProviders from one InfoSet, you can call up a separate join control. You can find additional information under Editing InfoProviders in the Join Control.
    Structure
    Displaying an InfoProvider in the Join Control
InfoProviders are displayed as a table in the join control. A symbol in the header indicates that an InfoObject is time-dependent.
    Depending on the type of InfoProvider, the following information is displayed in the rows of the table:
    ·        for ODS objects: A field (a key field or a data field)
    ·        for InfoObjects: The InfoObject itself, compounded characteristics, or an attribute
    Since the fields for ODS objects and the attributes of InfoObjects are defined using InfoObjects, each row ends with an InfoObject.
    InfoObjects are described as follows in the columns of the table:
    Column
    Information
    Use field
    Field selection for an InfoSet: If there is a check mark in this checkbox, the corresponding field or attribute from an InfoProvider is released for use in reporting. This means that it is available in the BEx Query Designer to be used for defining queries.
    There is a check mark in this checkbox by default.
    You can restrict the number of available fields or attributes from an InfoProvider by removing this check mark.
    If an InfoObject has the property “exclusive attribute”, the checkbox for selecting this field object in the join control is not ready for input. This is because the respective characteristic can only be used as a display attribute for another characteristic.
    In the BEx Query Designer these display attributes are not available for the query definition in the InfoProvider directory tree (see Defining a New Query).
    In order to add these field objects in queries, you have to deactivate the property “attribute only” in the InfoObject maintenance. (See Tab: General) This can influence the performance of database access.
    Key field, additional field
The key-field symbol means:
·        a key field, for ODS objects
·        the InfoObject itself or a compounded characteristic, for InfoObjects
The additional-field symbol means additional attributes; for time-dependent InfoObjects:
·        the start of a valid time interval (valid from)
·        the end of a valid time interval (valid to)
and, for all InfoProviders:
·        key dates
    Technical name
    Object type (represented by the corresponding symbol)
    Examples:
    Characteristic
    Key figure
    Unit
    Time-characteristic
    Description
    Long text description
    Checkbox for temporal operands
    This column is filled only for D-type (date) fields or attributes from an InfoProvider and for time-characteristics, from which a key date or a time interval is derived.
    If there is a checkmark in this checkbox, it means that the InfoObject is used as a temporal operand.
    The checkbox is empty by default. If there is a check mark in the checkbox, and a key date can be derived, the additional fields mentioned above are added to the InfoProvider.
    See Temporal Joins.
    The following functions are available from the context menu (right mouse-click) of an entry in a table:
    ·        Hide Time-Dependent Fields
If you hide the time-dependent attributes that belong to a time-dependent InfoObject, only the attributes that are not time-dependent are used in reporting. Otherwise, the InfoObject is treated as a time-dependent InfoObject within the InfoSet; this can affect how the join is processed. For more information see Temporal Joins.
    ·        Delete Object
    You use this function to remove an object from the join control.
    ·        Left Outer Join or Inner Join
    For more information on the left outer-join operator, see Defining Join Conditions.
    ·        Select All Fields
    If you choose this option, all fields or attributes for an InfoProvider are released for reporting. The indicators are respectively set in the column Use Field.
    ·        Deselect All Fields
    If you choose this option, all indicators are removed from the column Use Field.
    Displaying Join Conditions in the Join Control
    A join condition is displayed as a line connecting exactly one InfoObject from a row in one object with exactly one InfoObject from a row in another object.
    For more information see Defining Join Conditions.
    Navigating in the Join Control
    Location of the individual objects
The system inserts each object into the join control at a fixed, predetermined default size.
    If you want to insert a new object next to a specific table, select the table you want. The system inserts the new object at the same level, to the right of the selected table.
    If no table is selected, the system inserts the new object at the same level, to the right of the table furthest away on the right.
    You are able to position each ODS object and each InfoObject freely in the join control. Position the cursor over the header of the object that you want to move, press the left mouse-button, and keeping the button pressed down, drag the object to its new position.
    The positioning of the individual objects within the join control does not affect the way the join is processed.
    Size of the individual objects
    Each time you click on the Zoom in icon, the view is enlarged by 10%.
    Each time you click on the Zoom out icon, the view is reduced by 10%.
    The Auto-arrange function arranges the objects automatically into an overview.
    Navigator
    You click on the  Hide/Display Navigator function to get to the navigation help.
    These functions are also available from the context menu of the join control.
    The navigator is particularly useful if not all the objects are visible at the same time.
    If you want to change the section of the screen that is displayed, you move the red frame in the navigator.
    If you want to change the size of the objects, you adjust the dimensions of the frame itself:
    Reducing the frame has the same effect as the zoom-in function.
    Enlarging the frame has the same effect as the zoom-out function.
    You can also choose the functions Zoom in, Zoom out and Show/Hide Navigator in the context menu for the join control.
    Changing the descriptions
    The descriptive texts that are used in the metadata repository for the InfoProviders and their attributes are also used in the join control.
    If you use InfoProviders or InfoObjects more than once as attributes in the join, it helps if you change these descriptive texts for the purposes of the InfoSet. This enables you to identify the individual objects more easily.
Choose the Change Description function. An overview of all the texts is displayed beneath the join control. You are able to change each of these texts.
    The following functions are available:
    Function
    Information
    All Objects
    A selection of the texts for
    ·        a single InfoProvider in the join
    ·        all the objects in the join
Install
Transfers the texts into the display in the join control.
    Get All Original Texts
    Undoes the changes made to the texts.
    If you click on the  Transfer function at this stage, the system re-inserts the descriptions from the metadata repository.
    Delete
    Select one or more objects that you want to delete from the join and click on the  Delete function.
    Saving a join as a .jpg file
Choose the Save as jpg function to save your join definition as a graphic in JPEG file format on a PC.
    Editing InfoProviders in the Join Control
    Use
    In order to define join conditions between two InfoProviders, you can bring up a new display screen separately and edit there.
    To get an overview using the InfoProviders contained in the InfoSet, we recommend using the join control for the Change InfoSet screen. For this display a zoom factor of 50%, for example, would be suitable.
    To edit individual InfoProviders within the InfoSet, we recommend using the separate display of two InfoProviders in the join control for the Editing Selected Objects screen. For this display a zoom factor of 120%, for example, would be suitable.
    Prerequisites
    You have transferred the InfoProviders you want from the Change InfoSet screen into the join control. For more information see Editing InfoSets.
    Procedure
    You are in the Change InfoSet screen in the join control. Press the buttons CTRL + Shift, and select the two InfoProviders you want.
    Choose  Selected Objects. The Editing Selected Objects screen appears.  The system displays both InfoProviders in full size.
    Set or delete the join conditions you want.
    The following functions are available from the context menu (right mouse-click) of an entry in a table:
    Hide Time-Dependent Fields
    Left Outer Join or Inner Join
    Select All Fields
    Deselect All Fields
    The following editing functions are available by using buttons in the toolbar:
    Zoom in
    Zoom out
    Show/Hide Navigator
    Save as jpg
    Print
    For more information see Join-Control.
    Go back. You get to the Change InfoSet screen.
    Hareesh

  • Report is showing duplicate data due to update

    Hi All,
I need to eliminate the duplicate data in a report, which occurs due to an update (not a correction) of the employee assignment in Oracle HRMS. I am already using the max effective start date in the WHERE clause on both tables, per_all_people_f and per_all_assignments_f.
    Regards,
    Ssali

If you get duplicate data, change your "select" to "select unique".
    Maybe this is a specific Oracle EBusiness Suite thing. If so, ask it in the EBusiness Suite forum.

  • SQL Developer Data Modeler  "Reports"  Duplicate Columns in Index- Bug

    I am using SQL Data Modeler of SQL Developer - 3.2.20.09
    Problem Description:
In the "Indexes" section of the report generated for a table, columns appear twice.
    Steps to Reproduce:
    1. Create an Entity in Modeler: COMPANY, Add attributes COMP_ID and COMP_DESC to it, mark COMP_ID as PK
    2. Engineer it to Relational Model
    3. Go to 'File->Data Modeler->Reports'
    4. Select 'Available reports' = 'Table' and press 'Generate Report'
    5. View the report.
    Indexes shown for the table contains duplicate columns for COMP_ID
    Regards,

    Hi
    I've just tested with SQL Developer 3.2.20.09 and built in Data Modeler 3.1.4.710 with Data Modeler reports 11.2.0.09.87
There are no duplicate columns for COMP_ID in the report.
    What are your versions? You can check them from Tools->Preferences->Extensions

  • Report Builder showing duplicate data

    Hi everyone!
When I add subreports to my table, it duplicates/repeats the data many times.
Why is this happening, and how do I stop it?
    Thanks :)
    Allana

    Hi Allana,
When I directly insert a subreport without any parameters into a details row cell, I can easily reproduce this issue in my environment. Generally, if we want to avoid this issue, we must design a parameterized report (for example, a report that shows the details for a specific customer) as the subreport. For more details, please refer to the following steps:
Create a parameter, then add a filter based on the parameter to filter the data in the subreport.
In the main report, insert the subreport with the corresponding parameter values.
Then we can filter the subreport based on the parameter values to eliminate the duplicate data. Besides, we can also add a group in the main report to avoid duplicate data in the main report.
    References:
    Tutorial: Adding Parameters to a Report (SSRS)
    Add a Subreport and Parameters (Report Builder and SSRS)
    If there are any other questions, please feel free to ask.
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Duplicate records in report

    hello guys,
He was asking: I have duplicate records in the report, how do we rectify them?
Why and how do duplicate records appear in reporting? How is it possible?
Please explain to me how this is possible.
    thanks & regards

    Hi,
It may be that your data target is reading data from a DSO (for example).
If this DSO has account as a key field but not center, then accounts with different centers but the same amount can accumulate into apparently duplicate data.
This case may occur with a flat file load, and the records need to be corrected in that case. The flat file load can also work directly when we have both account and center as key fields for that particular DSO.
This is one scenario that can happen, among others.
    Best Regards,
    Arpit 
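Arpit's point about key fields can be illustrated with a small sketch (hypothetical field names; assumes an additive key figure): with only account as key field, the lines for different centers merge into one accumulated record.

```python
records = [
    {"account": "4000", "center": "C1", "amount": 100},
    {"account": "4000", "center": "C2", "amount": 100},
]

def load_to_dso(records, key_fields):
    """Accumulate the (additive) key figure per semantic key,
    mimicking how a DSO collapses records sharing the same key."""
    dso = {}
    for r in records:
        k = tuple(r[f] for f in key_fields)
        dso[k] = dso.get(k, 0) + r["amount"]
    return dso

load_to_dso(records, ["account"])            # -> {('4000',): 200}  lines merge
load_to_dso(records, ["account", "center"])  # -> two separate lines of 100
```

Adding center to the key keeps the records apart, which is Arpit's fix.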

  • To remove  duplicate  data using connect by prior

    Hi ,
I want the details of whom each employee reports to, without duplication.
In the table data, one employee reports to two employees, so the reporting-to rows appear twice.
    Query:
    SELECT lpad(' ', (level - 1) * 2) || EMPLOYEE_NAME as EMP_NAME,SUP_BU AS BU_CODE,SUP_REP_BU,EMP_NO,EMPLOYEE_NAME,LEVEL AS THE_LEVEL
    FROM ATTD_REPORT_TO_VW
    WHERE EMP_NO IS NOT NULL
    CONNECT BY PRIOR SUP_BU = SUP_REP_BU
    START WITH SUP_BU = :p_bu
But I get duplicate data. Suppose I remove the duplication using the DISTINCT keyword: then the hierarchical ordering goes wrong.
Please provide a solution.
    Thanks ,
    Maran

Please ask this question in the separate SQL/PLSQL forum, and also provide more information with sample data.
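Until the SQL forum answers, the underlying problem can be stated language-neutrally: DISTINCT removes duplicates but gives no ordering guarantee, so the hierarchy order is lost. A minimal Python sketch (hypothetical rows) of deduplicating while preserving traversal order:

```python
# Hypothetical hierarchy traversal output: E2 is reached through two
# supervisors, so its row appears twice in hierarchy order.
hierarchy_rows = [
    {"emp_no": "E1", "level": 1},
    {"emp_no": "E2", "level": 2},
    {"emp_no": "E3", "level": 2},
    {"emp_no": "E2", "level": 3},  # duplicate: second reporting line
]

def dedup_keep_order(rows, key):
    """Keep only the first occurrence of each key, preserving the
    original (hierarchical) row order -- unlike DISTINCT, which is
    free to return rows in any order."""
    seen, out = set(), []
    for r in rows:
        if r[key] not in seen:
            seen.add(r[key])
            out.append(r)
    return out

deduped = dedup_keep_order(hierarchy_rows, "emp_no")
# -> rows for E1, E2, E3, still in first-seen hierarchy order
```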

  • Issue in Due date analysis report S_ALR_87012168

    Hi Friends,
    We have a strange issue regarding due date analysis report (S_ALR_87012168)...
When I execute this report, it displays different amounts in the "Not Due" column under the 0-30 bucket. In the 0-30 bucket, the Not Due column shows 10,000 and the Total Open Items column shows 15,000. How is that possible?
    Thanks to everyone
    Anifi

    Hi,
This has been closed; it was due to a local versus document currency problem.
    Thanks

BTREE and duplicate data items: over 300 people read this, nobody answers?

    I have a btree consisting of keys (a 4 byte integer) - and data (a 8 byte integer).
    Both integral values are "most significant byte (MSB) first" since BDB does key compression, though I doubt there is much to compress with such small key size. But MSB also allows me to use the default lexical order for comparison and I'm cool with that.
    The special thing about it is that with a given key, there can be a LOT of associated data, thousands to tens of thousands. To illustrate, a btree with a 8192 byte page size has 3 levels, 0 overflow pages and 35208 duplicate pages!
    In other words, my keys have a large "fan-out". Note that I wrote "can", since some keys only have a few dozen or so associated data items.
    So I configure the b-tree for DB_DUPSORT. The default lexical ordering with set_dup_compare is OK, so I don't touch that. I'm getting the data items sorted as a bonus, but I don't need that in my application.
    However, I'm seeing very poor "put (DB_NODUPDATA) performance", due to a lot of disk read operations.
    While there may be a lot of reasons for this anomaly, I suspect BDB spends a lot of time tracking down duplicate data items.
    I wonder if in my case it would be more efficient to have a b-tree with as key the combined (4 byte integer, 8 byte integer) and a zero-length or 1-length dummy data (in case zero-length is not an option).
    I would loose the ability to iterate with a cursor using DB_NEXT_DUP but I could simulate it using DB_SET_RANGE and DB_NEXT, checking if my composite key still has the correct "prefix". That would be a pain in the butt for me, but still workable if there's no other solution.
    Another possibility would be to just add all the data integers as a single big giant data blob item associated with a single (unique) key. But maybe this is just doing what BDB does... and would probably exchange "duplicate pages" for "overflow pages"
    Or, the slowdown is a BTREE thing and I could use a hash table instead. In fact, what I don't know is how duplicate pages influence insertion speed. But the BDB source code indicates that in contrast to BTREE the duplicate search in a hash table is LINEAR (!!!) which is a no-no (from hash_dup.c):
     while (i < hcp->dup_tlen) {
          memcpy(&len, data, sizeof(db_indx_t));
          data += sizeof(db_indx_t);
          DB_SET_DBT(cur, data, len);
          /*
           * If we find an exact match, we're done. If in a sorted
           * duplicate set and the item is larger than our test item,
           * we're done. In the latter case, if permitting partial
           * matches, it's not a failure.
           */
          *cmpp = func(dbp, dbt, &cur);
          if (*cmpp == 0)
               break;
          if (*cmpp < 0 && dbp->dup_compare != NULL) {
               if (flags == DB_GET_BOTH_RANGE)
                    *cmpp = 0;
               break;
          }
    What's the expert opinion on this subject?
    Vincent
    Message was edited by:
    user552628
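Vincent's proposed DB_SET_RANGE + DB_NEXT simulation of DB_NEXT_DUP can be prototyped outside BDB (a sketch over a sorted Python list standing in for the btree; the keys are hypothetical):

```python
import bisect

# Sorted "btree" of composite keys (key, seq), standing in for the
# (4-byte int, 8-byte int) composite keys described in the post.
btree = sorted([(1, 10), (1, 11), (2, 5), (3, 1), (3, 2), (3, 3)])

def iter_dups(tree, key):
    """Simulate DB_SET_RANGE + DB_NEXT: seek to the first entry whose
    prefix is >= key, then scan forward while the prefix still matches."""
    i = bisect.bisect_left(tree, (key,))
    while i < len(tree) and tree[i][0] == key:
        yield tree[i]
        i += 1

list(iter_dups(btree, 3))  # -> [(3, 1), (3, 2), (3, 3)]
```

The prefix check at each step is the "pain in the butt" part Vincent mentions, but the logic itself is short.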

    Hi,
The special thing about it is that with a given key, there can be a LOT of associated data, thousands to tens of thousands. To illustrate, a btree with a 8192 byte page size has 3 levels, 0 overflow pages and 35208 duplicate pages!
In other words, my keys have a large "fan-out". Note that I wrote "can", since some keys only have a few dozen or so associated data items.
So I configure the b-tree for DB_DUPSORT. The default lexical ordering with set_dup_compare is OK, so I don't touch that. I'm getting the data items sorted as a bonus, but I don't need that in my application.
However, I'm seeing very poor "put (DB_NODUPDATA) performance", due to a lot of disk read operations.
In general, performance slowly decreases when there are a lot of duplicates associated with a key. For the Btree access method, lookups and inserts have O(log n) complexity (which implies that the search time depends on the number of keys stored in the underlying db tree). When doing puts with DB_NODUPDATA, leaf pages have to be searched in order to determine whether the data is a duplicate or not. Thus, given that for each key there is (in most cases) a large number of associated data items (up to thousands or tens of thousands), an impressive number of pages have to be brought into the cache to check against the duplicate criterion.
Of course, the problem of sizing the cache and the database's pages arises here. Your settings for both should tend toward large values; this way the cache will be able to accommodate large pages (in which hundreds of records can be hosted).
Setting the cache and the page size to their ideal values is a process of experimentation.
http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/pagesize.html
http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/cachesize.html
    While there may be a lot of reasons for this anomaly,
    I suspect BDB spends a lot of time tracking down
    duplicate data items.
I wonder if in my case it would be more efficient to have a b-tree with as key the combined (4 byte integer, 8 byte integer) and a zero-length or 1-length dummy data (in case zero-length is not an option).
Indeed, this should be the best alternative, but testing must be done first. Try this approach and provide us with feedback. Note that you can have records with a zero-length data portion.
Also, you could provide more information on whether or not you're using an environment and, if so, how you configured it. Have you thought of using multiple threads to load the data?
Another possibility would be to just add all the data integers as a single big giant data blob item associated with a single (unique) key. But maybe this is just doing what BDB does... and would probably exchange "duplicate pages" for "overflow pages".
This is a terrible approach, since bringing an overflow page into the cache is more time-consuming than bringing in a regular page, so a performance penalty results. Also, processing the entire collection of keys and data implies more work from a programming point of view.
Or, the slowdown is a BTREE thing and I could use a hash table instead. In fact, what I don't know is how duplicate pages influence insertion speed. But the BDB source code indicates that in contrast to BTREE the duplicate search in a hash table is LINEAR (!!!), which is a no-no (from hash_dup.c).
The Hash access method has, as you observed, a linear duplicate search, and thus a search time proportional to the number of items in the bucket, i.e. O(n) in the size of the duplicate set. Combined with the fact that you don't want duplicate data, using the Hash access method may not improve performance.
    This is a performance/tunning problem and it involves a lot of resources from our part to investigate. If you have a support contract with Oracle, then please don't hesitate to put up your issue on Metalink or indicate that you want this issue to be taken in private, and we will create an SR for you.
    Regards,
    Andrei

  • DTP Error: Duplicate data record detected

    Hi experts,
    I have a problem with loading data from DataSource to standart DSO.
    In DS there are master data attr. which have a key  containing id_field.
    In End routine I make some operations which multiple lines in result package and fill new date field - defined in DSO ( and also in result_package definition )
    I.E.
    Result_package before End routine:
Id_field   attr_a   attr_b   ...   attr_x   date_field
1          a1       b1             x1
2          a2       b2             x2
Result_package after End routine:
Id_field   attr_a   attr_b   ...   attr_x   date_field
1          a1       b1             x1       d1
2          a1       b1             x1       d2
3          a2       b2             x2       d1
4          a2       b2             x2       d2
The date_field (date type) is one of the key fields in the DSO.
When I execute the DTP I get an error in the section "Update to DataStore Object": "Duplicate data record detected".
"During loading, there was a key violation. You tried to save more than one data record with the same semantic key."
As far as I know, the result_package key contains all fields except fields of type i, p and f.
In simulate mode (debugging) everything is correct and the status is green.
In the DSO I have unchecked the checkbox "Unique Data Records".
    Any ideas?
    Thanks in advance.
    MG

    Hi,
In the end routine, try giving
DELETE ADJACENT DUPLICATES FROM RESULT_PACKAGE COMPARING XXX YYY.
Here XXX and YYY are key fields, so that you can eliminate the extra duplicate records. Note that DELETE ADJACENT DUPLICATES only removes neighbouring rows, so sort the table by the comparison fields first.
Or you can even try giving
SORT itab_XXX BY field1 field2 field3 ASCENDING.
DELETE ADJACENT DUPLICATES FROM itab_XXX COMPARING field1 field2 field3.
This can be given before you loop over your internal table (in case you are using an internal table and loops); itab_XXX is the internal table.
field1, field2 and field3 may vary depending on your requirement.
By using the above lines, you can get rid of the duplicates coming through the end routine.
    Regards
    Sunil
    Edited by: Sunny84 on Aug 7, 2009 1:13 PM
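Sunil's SORT + DELETE ADJACENT DUPLICATES pattern translates directly to other languages; a minimal Python equivalent (field names hypothetical):

```python
from itertools import groupby

package = [
    {"id": 2, "attr": "b"},
    {"id": 1, "attr": "a"},
    {"id": 1, "attr": "a"},  # duplicate of the row above
]

def delete_adjacent_duplicates(rows, key_fields):
    """Sort by the comparison fields, then keep the first row of each
    adjacent group -- the same effect as ABAP's SORT followed by
    DELETE ADJACENT DUPLICATES ... COMPARING."""
    keyf = lambda r: tuple(r[f] for f in key_fields)
    return [next(g) for _, g in groupby(sorted(rows, key=keyf), key=keyf)]

delete_adjacent_duplicates(package, ["id", "attr"])
# -> [{'id': 1, 'attr': 'a'}, {'id': 2, 'attr': 'b'}]
```

As in ABAP, the sort is essential: adjacent-duplicate removal only sees neighbouring rows.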

  • Data Quality Report Error

    In DS 12.1.1.0,
    when I try to open a Data Quality Report in the Management Console I get a new Windows with the following error message:
Error
Page formatting error: FormulaFunction threw an unexpected exception from the 'evaluate' method.
(Original German message: "Fehler bei der Seitenformatierung: FormulaFunction hat eine unerwartete Ausnahme von der 'evaluate'-Methode ausgelöst.")
    Tomcat log stdout says:
    19-05-09 10:58:59:708 - {ERROR} sdk.JRCCommunicationAdapter Thread [http-28080-Processor24];  JRCAgent5 detected an exception: Fehler bei der Seitenformatierung: FormulaFunction hat eine unerwartete Ausnahme von der 'evaluate'-Methode ausgelöst.
         at com.crystaldecisions.sdk.occa.report.lib.ReportSDKException.throwReportSDKException(Unknown Source)
         at com.businessobjects.reports.sdk.b.i.byte(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.request(Unknown Source)
         at com.crystaldecisions.proxy.remoteagent.y.a(Unknown Source)
         at com.crystaldecisions.proxy.remoteagent.r.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.cf.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.ReportSource.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.ReportSource.getPage(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.AdvancedReportSource.getPage(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.NonDCPAdvancedReportSource.getPage(Unknown Source)
         at com.crystaldecisions.report.web.event.ac.a(Unknown Source)
         at com.crystaldecisions.report.web.event.ac.a(Unknown Source)
         at com.crystaldecisions.report.web.event.b2.a(Unknown Source)
         at com.crystaldecisions.report.web.event.b7.broadcast(Unknown Source)
         at com.crystaldecisions.report.web.event.av.a(Unknown Source)
         at com.crystaldecisions.report.web.WorkflowController.do(Unknown Source)
         at com.crystaldecisions.report.web.WorkflowController.doLifecycle(Unknown Source)
         at com.crystaldecisions.report.web.ServerControl.a(Unknown Source)
         at com.crystaldecisions.report.web.ServerControl.processHttpRequest(Unknown Source)
         at org.apache.jsp.jsp.dqcrystalviewer_jsp._jspService(dqcrystalviewer_jsp.java:274)
         at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:97)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:334)
         at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:314)
         at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:264)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
         at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:672)
         at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:463)
         at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:398)
         at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:301)
         at org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1063)
         at org.apache.struts.action.RequestProcessor.processForwardConfig(RequestProcessor.java:386)
         at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:229)
         at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1194)
         at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:689)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
         at com.acta.webapp.mdreport.servlet.JSFilter.doFilter(Unknown Source)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
         at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
         at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
         at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
         at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
         at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
         at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
         at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:869)
         at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:664)
         at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
         at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:80)
         at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:684)
         at java.lang.Thread.run(Thread.java:595)
    Any idea how this can happen?
    Regards,
        Martin
    Edited by: Martin Bernhardt on May 19, 2009 11:03 AM

    Martin,
    can you try to change the Settings of your Internet Explorer:
    Tools --> Internet Options --> General/Appearance/Languages
    Add "English (United States) [en-US]" and move it to first place above the "German (Germany) [de]" entry?
    Niels

  • Data quality report under management consol

I am using BO DS 4 and trying to use the Data Quality reports in the Management Console. These reports are generated for every run of the address cleanse job.
Is it possible to get a consolidated report for a whole day of the job rather than for each run? If yes, please let me know how I can do that.

Please respond if anybody has any clue on this.

  • Delivery Date-Wise Report

    Dear all
Can I have a delivery-date-wise report for purchase orders?
Which transaction code, and what parameters do I have to input?
    Manoj

    Hi
You can get it in the ME2* series of transactions (ME2L, ME2M, ME2N etc.) by giving the delivery date range, but the delivery dates themselves will not be displayed in the report. To see the delivery date per PO, look at table EKET.
    DBehera

  • How to delete the duplicate data  from PSA Table

    Dear All,
How do I delete the duplicate data from the PSA table? I have a purchase cube and I am getting the data from the item DataSource.
In the PSA table I found some cancellation records: for such a record the quantity is negative, while for the same record the value is positive.
For this reason the quantity updates to the target correctly, but the values are summed, giving a total over both the normal and the cancellation records.
Please let me know how to delete this data while updating to the target.
    Thanks
    Regards,
    Sai

    Hi,
Deleting records in the PSA table is difficult, and how many would you delete?
You can achieve this in different ways:
1. Create a DSO and maintain some key fields; it will overwrite based on the key fields.
2. Write ABAP logic to delete the duplicate records at the InfoPackage level; check with your ABAPer.
3. Restrict the cancellation records at query level.
    Thanks,
    Phani.
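Phani's third option (restricting the cancellation records) can be prototyped as a pre-load filter; a minimal Python sketch with hypothetical fields, netting the quantity per document key and dropping keys that net to zero:

```python
from collections import defaultdict

# Hypothetical PSA rows: the second line cancels the first (negative
# quantity), yet both carry a positive value, as the poster describes.
psa = [
    {"doc": "PO1", "item": 10, "quantity":  5, "value": 50.0},
    {"doc": "PO1", "item": 10, "quantity": -5, "value": 50.0},
    {"doc": "PO2", "item": 20, "quantity":  3, "value": 30.0},
]

def drop_cancelled(rows, key_fields, measure="quantity"):
    """Net the quantity per document key and keep only rows whose key
    does not net to zero -- one way to filter booking/cancellation
    pairs before they reach the target."""
    totals = defaultdict(float)
    for r in rows:
        totals[tuple(r[f] for f in key_fields)] += r[measure]
    return [r for r in rows if totals[tuple(r[f] for f in key_fields)] != 0]

kept = drop_cancelled(psa, ["doc", "item"])
# Only the PO2 row survives; the PO1 booking/cancellation pair is dropped,
# so its value can no longer inflate the target.
```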
