Missing records while fetching data from Data mart

Hi,
I have some missing records in the ODS. The data is fetched from another BW system.
It is a delta load, and all the loads have been successful so far, but some records are still missing.
When I look in the reconstruction tab, some requests show the transfer structure status as a clock icon ("transfer structure has changed since the last request"). Is it because of this timestamp that the status changed?
I have done a re-initialization and the missing data was fetched, but I would like to know the cause of the missing records.
Regards,
Anita

Hi Kedar,
If there were a timestamp difference, the data load should have failed, yet all the delta loads were successful.
But now they have realised there are some missing records. Just for analysis, I was looking into the reconstruction tab, and the transfer structure status was displayed as a clock.
In reality, though, there was no change in the transfer structure.
Sometimes we used to get a timestamp error,
and we would replicate the DataSource and activate the transfer structure.
What needs to be done to avoid this in future? The load is triggered through a process chain, and each time the data load status was successful.
We can't replicate the DataSource on every load through the process chain unless the transfer structure has actually changed.
But my concern is whether this is the cause of the missing records, or something else.
Regards,
Anita
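
(A possible preventive step, sketched under the assumption that the standard BW 3.x program RS_TRANSTRU_ACTIVATE_ALL exists in your release; verify the program name and its selection screen in SE38 before scheduling anything. If the transfer structure timestamp tends to drift, an ABAP process step placed ahead of the InfoPackage in the process chain can re-activate the transfer structures automatically:

report z_reactivate_transfer_structures.
* Hypothetical wrapper for a process-chain ABAP step. It re-runs
* the transfer structure activation, so a stale timestamp cannot
* leave requests with the "transfer structure has changed" clock
* status at load time. 'ZDAILY' is a placeholder variant that you
* would create with the source-system/InfoSource selections.
submit rs_transtru_activate_all
  using selection-set 'ZDAILY'
  and return.

This avoids replicating the DataSource manually on every run while still protecting the chain against timestamp mismatches.)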

Similar Messages

  • Error while loading Reported Financial Data from Data Stream

    Hi Guys,
    I'm facing the following error while loading Reported Financial Data from Data Stream:
    Message no. UCD1003: Item "Blank" is not defined in Cons Chart of Accts 01
    The message appears for the target data: the item is not filled in almost 50% of the target data records, and then the error message appears.
    On deeper analysis I found that some items are defined with a debit/credit sign of + and with no breakdown. When these items appear as negative (credit) in the source data, they are not loaded properly to the target: the item is not filled, hence the error.
    For example: item "114190 - Prepayments" is defined with a + debit/credit sign. When it is posted as negative/credit in the source data, it is not properly written to the target.
    Should I define a breakdown category for these items? I think there's something wrong with the item definitions, or I'm missing something...
    I would highly appreciate your quick assistance in this.
    Kind regards,
    Amir

    Found the answer in OSS Note 642591.
    Thanks

  • Problems While Extracting Hours From Date Field

    Hi Guys,
    Hope you are doing well.
    I am facing some problems while extracting hours from a date field. Below is an example of my orders table:
    select * from orders;
    Order_No    Arrival_Time              Product_Name
    1           20-NOV-10 10:10:00 AM     Desktop
    2           21-NOV-10 17:26:34 PM     Laptop
    3           22-JAN-11 08:10:00 AM     Printer
    Earlier the requirement was to report how many orders were placed each day, for which I used a query with:
    arrival_time >= trunc((sysdate-1),'DD')
    and arrival_time < trunc((sysdate),'DD')
    That gives me how many orders were placed yesterday.
    Now I have a new requirement: a report, every 4 hours, of how many orders were placed in the last 4 hours. For example, if the current time is 8:00 AM IST, the query should fetch the orders placed from 4:00 AM to 8:00 AM; the next run at 12:00 PM IST should give the orders placed from 8:00 AM to 12:00 PM.
    I have a scheduler which will run this query every 4 hours, but how do I make the query fetch only the orders that arrived in the last 4 hours? I am not able to achieve this using trunc alone.
    Can you please assist me with how to make this happen? I have also checked EXTRACT, but I am not satisfied.
    Please help.
    Thanks In Advance
    Arijit

    you may try something like
    with testdata as (
      select sysdate - level/24 t from dual
      connect by level < 11
    )
    select
      to_char(sysdate, 'DD-MM-YYYY HH24:MI:SS') s
    , to_char(t, 'DD-MM-YYYY HH24:MI:SS') t
    from testdata
    where
    t >= trunc(sysdate, 'HH') - numtodsinterval(4, 'HOUR')
    S                      T
    19-06-2012 16:08:21    19-06-2012 15:08:21
    19-06-2012 16:08:21    19-06-2012 14:08:21
    19-06-2012 16:08:21    19-06-2012 13:08:21
    19-06-2012 16:08:21    19-06-2012 12:08:21
    trunc( ,'HH') truncates the minutes and seconds from the date.
    Extract hour works only on timestamps.
    regards
    Edited by: chris227 on 19.06.2012 14:13
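
    (Applied back to the orders table, the same predicate gives the last completed 4-hour window; a sketch assuming the arrival_time column from the post:
    select count(*) from orders
    where arrival_time >= trunc(sysdate, 'HH') - numtodsinterval(4, 'HOUR')
    and arrival_time < trunc(sysdate, 'HH');
    Truncating to the hour on both ends keeps each scheduled run's window aligned, so consecutive runs neither overlap nor leave gaps.)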

  • Error while extracting data from data source 0RT_PA_TRAN_CONTROL, in RSA7

    Hi Gurus,
    I'm getting the below error while extracting data from DataSource 0RT_PA_TRAN_CONTROL in RSA7. (Actually, this is an IS Retail DataSource used to push POSDM data into BI cubes.)
    The error is:
    Update mode "Full Upload" is not supported by the extraction API
    Message no. R3011
    Diagnosis
    The application program for the extraction of the data was called using update mode "Full Upload". However, this is not supported by the InfoSource.
    System Response
    The data extraction is terminated.
    Procedure
    Check for relevant OSS Notes, or send a problem message of your own.
    Your help in this regard would be highly appreciated.
    Thanks,
    David.

    Hi David,
    I have no experience with IS Retail data sources, but as the message clearly says, this DataSource is not supposed to be run in full mode.
    Try switching your DTPs/InfoPackages to delta mode.
    When checking the extraction in the source system with transaction RSA3 (extractor checker), likewise switch the Update mode field to Delta.
    BR
    m./

  • When to refresh Servlet data from Data Base

    Hello all,
    I have a servlet that retrieves a few hundred thousand records from a database table.
    The data in the table is updated only once or twice every week,
    while the same servlet instance serves all users, who access the servlet many times a day.
    I would therefore like to avoid retrieving the data from the database on each servlet access,
    and instead let all requests share the data already retrieved and kept in servlet members.
    First, what is the best way to avoid retrieving the data from the database on each servlet access?
    And how could I have some kind of trigger that refreshes the servlet's data from the database every few days?
    Thanks in advance for every idea.
    Ami

    Java_A wrote:
    Thanks Saish for your reply.
    I'm not using DAO in my application; I retrieve the data from the BI database using a web service, and the response time querying the BI database is not quick enough.
    Hence, I wouldn't want to query the BI server on each servlet access.
    Because the data I retrieved at the beginning using the web service contains all the required data for all servlet requests, I thought to store the data (~200K rows) once in the servlet and use it for all requests.
    Why not store the results locally in your own database after you fetch them?
    This still leaves me with the questions: on which event should I query the BI data, and when, or on which event, should I refresh the data again from the BI server?
    Query at startup, on user demand, or when the data becomes stale. It depends on your requirements.
    Thanks
    Ami
    - Saish

  • Unable to access the data from Data Management Gateway: Query timeout expired

    Hi,
    For the last 2-3 days the data refresh has been failing on our Power BI site. I have checked the following:
    1. The gateway is in running status.
    2. Data source is also in ready status and test connection worked fine too.
    3. Below is the error in System Health -
    Failed to refresh the data source. An internal service error has occurred. Retry the operation at a later time. If the problem persists, contact Microsoft support for further assistance.        
    Error code: 4025
    4. Below is the error in Event Viewer.
    Unable to access the data from Data Management Gateway: Query timeout expired. Please check 1) whether the data source is available 2) whether the gateway on-premises service is running using Windows Event Logs.
    5. This is the correlation id for the latest refresh failure:
    f9030dd8-af4c-4225-8674-50ce85a770d0
    6. The Refresh History error is:
    Errors in the high-level relational engine. The following exception occurred while the managed IDataReader interface was being used: The operation has timed out. Errors in the high-level relational engine. The following exception occurred while the
    managed IDataReader interface was being used: Query timeout expired. 
    Any idea what could have gone wrong so suddenly? Everything had been working fine for the last month.
    Thanks,
    Richa

    Never mind, I figured out there was a lock on a SQL table which caused all the problems. Once I released the lock, the PowerPivot refresh started working fine.
    Thanks.

  • Runtime error when Transfering data from data object to a file

    Hi everybody,
    I'm having a problem when I transfer data from data object to file. The codes like following :
    data : full_path(128).
    OPEN DATASET full_path FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
    and transfer data from flat structure to this file full_path
      move:    tab                 to c_output-tab_5,
                  tab                 to c_output-tab_4,
                  tab                 to c_output-tab_3.
      transfer c_output to full_path.      // Error Line
    The detailed error text is the following:
    For the statement
       "TRANSFER f TO ..."
    only character-type data objects are supported at the argument position
    "f".
    In this case, the operand "f" has the non-character-type "u". The
    current program is a Unicode program. In the Unicode context, the type
    'X', or structures containing not only character-type components, are
    regarded as non-character-type.
    transfer c_output to full_path.   " error line
    Please help me to fix this issue!
    Thank you in advance!
    Edited by: Hai Nguyen on Mar 4, 2009 10:55 AM

    Hi Mickey,
    Thanks for your answer.
    I found out that the structure c_output has a field with data type X, so I now know the cause of the issue:
    data: begin of c_output,
            vbeln(10),
            tab_5 like tab,
            posnr(6),
            tab_4 like tab,
            topmat(18),
            tab_3 like tab,
          end of c_output.
    data: tab type x value 9.
    Could you tell me how to fix it? What do I have to do in this situation?
    Thank you very much!
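
    (One way out, as a minimal sketch; the field names mirror the post, and it assumes the tab fields are only meant to hold a tab separator. Declare tab as a character type using the horizontal_tab constant from the standard class cl_abap_char_utilities, so that every component of c_output is character-type, which is what TRANSFER requires in a Unicode program:
    * character-type tab instead of "data: tab type x value 9"
    data: tab type c length 1
            value cl_abap_char_utilities=>horizontal_tab.
    data: begin of c_output,
            vbeln(10),
            tab_5 like tab,   " now character-type, not X
            posnr(6),
            tab_4 like tab,
            topmat(18),
            tab_3 like tab,
          end of c_output.
    With only character-type components, "transfer c_output to full_path" no longer raises the Unicode type error.)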

  • Delete Transaction Data from date to date

    Hi All,
    we want to delete transactional data from one date to another date.
    Is there any way to delete data from a from-date to a to-date?
    We are aware of the following tcodes:
    OBR1 - Reset transaction data
    CXDL - Delete transaction data from ledger
    But neither offers a period or from-date/to-date option.
    Example:
    we are in 2010; now we want to delete the data from 2005-2007, and we don't want to archive it.
    Thanks in advance
    Regards,
    MS

    Hi Eli,
    Thanks for the reply,
    Yes, you are right, it's not proper to delete data based on the period... but we are facing exactly this kind of scenario.
    Let me get some other opinions.
    Regards,
    MS

  • Error message while loading data from data mart

    Please can someone help with this?
    While loading data from one cube to another (I am actually creating a backup of the original cube), I am getting the error message "internal error occurred with time split" for one record. The long text of the error message gives Message no. RSAU101.

    Take a look at this thread:
    Internal error occured with time split
    Hope it helps,
    Gilad

  • Web Template is not able to fetch data from Data Provider

    hi friends,
    I have created a reporting agent setting for a particular query, supplied all the necessary parameters, defined the variants, activated it, created a scheduling package, and assigned my query to the scheduling package.
    Later, in the web template, on the web item tab I set the read mode to "precalculated web template", but the final report on the web says the web template is not able to get the data from the data provider.
    Am I missing anything?
    I look forward to your help.
    regards,
    sasidhar gunturu

    Hi,
    use the correct link to call the report.
    you can check the links : Re: Concept Of Precalculated Template
    https://www.sdn.sap.com/irj/sdn/developerareas/bi?rid=/webcontent/uuid/a8cd1f71-0a01-0010-4783-f119b6132d25 [original link is broken]
    Regards
    Happy Tony

  • Error while viewing data from data store

    Hello Gurus,
    We are facing an issue with the driver when we try to view data from a data store related to the Hyperion Essbase technology.
    ODI version is 11.1.1.6.
    Following is the error that we are getting:
    java.lang.IllegalArgumentException: Driver name cannot be empty
         at org.springframework.util.Assert.hasText(Assert.java:161)
         at com.sunopsis.sql.SnpsConnection.setDriverName(SnpsConnection.java:302)
         at com.sunopsis.dwg.dbobj.DwgConnectConnection.setDefaultConnectDefinition(DwgConnectConnection.java:380)
         at com.sunopsis.dwg.dbobj.DwgConnectConnection.<init>(DwgConnectConnection.java:274)
         at com.sunopsis.dwg.dbobj.DwgConnectConnection.<init>(DwgConnectConnection.java:288)
         at oracle.odi.core.datasource.dwgobject.support.DwgConnectConnectionCreatorImpl.createDwgConnectConnection(DwgConnectConnectionCreatorImpl.java:53)
         at com.sunopsis.graphical.frame.edit.EditFrameTableData.snpsInitializeSnpsComponentsSpecificRules(EditFrameTableData.java:85)
         at com.sunopsis.graphical.frame.SnpsEditFrame.snpsInitialize(SnpsEditFrame.java:1413)
         at com.sunopsis.graphical.frame.edit.AbstractEditFrameGridBorland.initialize(AbstractEditFrameGridBorland.java:623)
         at com.sunopsis.graphical.frame.edit.AbstractEditFrameGridBorland.<init>(AbstractEditFrameGridBorland.java:868)
         at com.sunopsis.graphical.frame.edit.EditFrameTableData.<init>(EditFrameTableData.java:50)
         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
         at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
         at oracle.odi.ui.editor.AbstractOdiEditor$1.run(AbstractOdiEditor.java:176)
         at oracle.ide.dialogs.ProgressBar.run(ProgressBar.java:655)
         at java.lang.Thread.run(Thread.java:662)
    Is there any specific JAR file related to Hyperion Essbase?
    And where do we find the default drivers that come with ODI?
    Please help.
    Thanks,
    Santy.

    You cannot view the data from an Essbase data store, as it isn't configured with a JDBC driver that supports this function.

  • Performance problem in select data from data base

    hello all,
    could you please suggest which form of the SELECT statement is good for fetching data from the database when the table contains more than 10 lakh (1,000,000) records?
    I am using a SELECT ... PACKAGE SIZE n statement, but it's taking a lot of time.
    with best regards
    srinivas rathod

    Hi Srinivas,
    if you are selecting huge volumes of data, you can reduce the runtime somewhat by using better techniques.
    I do not think SELECT ... PACKAGE SIZE on its own will give good performance.
    See the examples below:
    ABAP Code Samples for Simple Performance Tuning Techniques
    1. Query including select and sorting functionality
    Code A
    tables: mara, mast.
    data: begin of itab_new occurs 0,
            matnr like mara-matnr,
            ernam like mara-ernam,
            mtart like mara-mtart,
            matkl like mara-matkl,
            werks like mast-werks,
            aenam like mast-aenam,
            stlal like mast-stlal,
          end of itab_new.
    select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
    into table itab_new from mara as f inner join mast as g on
    f~matnr = g~matnr where g~stlal = '01' order by f~ernam.
    Code B
    tables: mara, mast.
    data: begin of itab_new occurs 0,
            matnr like mara-matnr,
            ernam like mara-ernam,
            mtart like mara-mtart,
            matkl like mara-matkl,
            werks like mast-werks,
            aenam like mast-aenam,
            stlal like mast-stlal,
          end of itab_new.
    select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
    into table itab_new from mara as f inner join mast as g on f~matnr =
    g~matnr where g~stlal = '01'.
    sort itab_new by ernam.
    Both of the above codes do essentially the same thing, but the execution time of Code B is considerably lower than that of Code A. Reason: the ORDER BY clause attached to a SELECT statement increases the execution time of the statement on the database, so it is cheaper to sort the internal table once after selecting the data.
    2. Performance Improvement Due to Identical Statements - Execution Plan
    Consider the queries below and their relative efficiency in saving execution time.
    Code C (Non-Identical Select Statements)
    tables: mara, mast.
    data: begin of itab_new occurs 0,
            matnr like mara-matnr,
            ernam like mara-ernam,
            mtart like mara-mtart,
            matkl like mara-matkl,
            werks like mast-werks,
            aenam like mast-aenam,
            stlal like mast-stlal,
          end of itab_new.
    select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
    into table itab_new from mara as f inner join mast as g on f~matnr =
    g~matnr where g~stlal = '01'.
    sort itab_new.
    select f~matnr f~ernam
    f~mtart f~matkl g~werks g~aenam g~stlal
    into table itab_new from mara as
    f inner join mast as g on f~matnr =
    g~matnr where g~stlal
    = '01'.
    Code D (Identical Select Statements)
    tables: mara, mast.
    data: begin of itab_new occurs 0,
            matnr like mara-matnr,
            ernam like mara-ernam,
            mtart like mara-mtart,
            matkl like mara-matkl,
            werks like mast-werks,
            aenam like mast-aenam,
            stlal like mast-stlal,
          end of itab_new.
    select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
    into table itab_new from mara as f inner join mast as g on f~matnr =
    g~matnr where g~stlal = '01'.
    sort itab_new.
    select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
    into table itab_new from mara as f inner join mast as g on f~matnr =
    g~matnr where g~stlal = '01'.
    Both of the above codes do essentially the same thing, but the execution time of Code D is considerably lower than that of Code C. Reason: during execution, each SQL statement is converted through a series of database phases. In the second phase (the prepare phase) an "execution plan" is determined for the current SQL statement and stored; if an identical select statement is used anywhere in the program, the stored execution plan is reused, saving time. So keep the text of a select statement exactly the same whenever it is used more than once in the program.
    3. Reducing Parse Time Using Aliasing
    A statement which does not have a cached execution plan must be parsed before execution, and this parsing phase is highly time- and resource-consuming. To cut parsing time, any SQL query should include alias names, for the following reasons:
    1. Providing the alias name enables the query engine to resolve the tables to which the specified fields belong.
    2. Providing a short alias name (a single-character alias) is more efficient than providing a long alias name.
    Code E
    select j~matnr j~ernam j~mtart j~matkl
    g~werks g~aenam g~stlal into table itab_new from mara as
    j inner join mast as g on j~matnr = g~matnr where
                g~stlal = '01'.
    In the above code the alias name used is 'j'.
    4. Performance Tuning Using Order by Clause
    If, after selecting records into an internal table, you are going to read a particular row based on some key values, the read can be optimized by ordering the selected fields (via ORDER BY) in the same order in which the keys appear in the READ statement.
    Code F
    tables: mara, mast.
    data: begin of itab_new occurs 0,
          matnr like mara-matnr,
          ernam like mara-ernam,
          mtart like mara-mtart,
          matkl like mara-matkl,
          end of itab_new.
    select MATNR ERNAM MTART MATKL from mara into table itab_new where
    MTART = 'HAWA' ORDER BY  MATNR ERNAM  MTART MATKL.
    read table itab_new with key MATNR = 'PAINT1'   ERNAM = 'RAMANUM'
    MTART = 'HAWA'   MATKL = 'OFFICE'.
    Code G
    tables: mara, mast.
    data: begin of itab_new occurs 0,
          matnr like mara-matnr,
          ernam like mara-ernam,
          mtart like mara-mtart,
          matkl like mara-matkl,
          end of itab_new.
    select MATNR ERNAM MTART MATKL from mara into table itab_new where
    MTART = 'HAWA' ORDER BY  ERNAM MATKL MATNR MTART.
    read table itab_new with key MATNR = 'PAINT1'   ERNAM = 'RAMANUM'
    MTART = 'HAWA'   MATKL = 'OFFICE'.
    In code F above, the read statement following the select has its keys in the order MATNR, ERNAM, MTART, MATKL, so the read is less time-intensive because the internal table is ordered in the same order as the keys in the read statement.
    5. Performance Tuning Using Binary Search
    A very simple but useful way to fine-tune the performance of a READ statement is to add BINARY SEARCH to it. If the internal table contains more than about 20 entries, the traditional linear search becomes time-intensive.
    Code H
    select * from mara into corresponding fields of table intab.
    sort intab by matnr.   " sort by the search key before BINARY SEARCH
    read table intab with key matnr = '11530' binary search.
    Code I
    select * from mara into corresponding fields of table intab.
    sort intab.     
    read table intab with key matnr = '11530'.
    Thanks
    Seshu
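
    (A short follow-up on the original question, as a minimal sketch; the table and field names are just the MARA examples above. For a table with more than 10 lakh rows, the biggest savings usually come from selecting only the fields you need into a typed internal table and keeping the WHERE clause selective. PACKAGE SIZE itself is not slow; it only pays off when each block is processed and discarded inside the SELECT...ENDSELECT loop instead of accumulating everything in memory:
    report z_select_mara_blocks.
    types: begin of ty_mara,
             matnr type mara-matnr,
             ernam type mara-ernam,
             mtart type mara-mtart,
             matkl type mara-matkl,
           end of ty_mara.
    data lt_mara type standard table of ty_mara.
    * field list instead of SELECT *, typed target instead of
    * CORRESPONDING FIELDS: less database time, less data transfer
    select matnr ernam mtart matkl
      from mara
      into table lt_mara
      package size 50000
      where mtart = 'HAWA'.
      " process and then clear each 50,000-row block here, so the
      " full result set never has to be held in memory at once
    endselect.)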

  • Remote exception while fetching information from ALI collaboration 4.5

    We have seen a recurring remote exception from the IDK API while accessing the collaboration service.
    It says:
    "java.rmi.RemoteException: Unexpected fault was returned by the server (faultcode: Server.userException, faultstring: org.xml.sax.SAXParseException: XML document structures must start and end within the same entity.)
    at com.plumtree.remote.prc.collaboration.project.ProjectManagerWrapper.queryProjects(ProjectManagerWrapper.java:177)"
    We are using IDK API 6.0 to fetch information from ALI Collaboration 4.5 concurrently.
    Please guide us.

    First and foremost question: do you think this is the only and best possible way to implement your business logic? To me it seems this could be achieved with much less code; more code means more chances of errors, and it becomes harder to debug and maintain.
    Anyway, it is quite difficult to pinpoint the error without knowing the underlying table structure and data. Here are some observations...
    In your outermost loop, you are doing this:
    FETCH cur_accdetail BULK COLLECT INTO vl_t_LogDate;
    If this step yields data, you populate the vl_t_ModStEnDate collection.
    However, if the FETCH results in an exception (maybe NO_DATA_FOUND), you write the error to a file and the program CONTINUES.
    In the next block of logic, you refer directly to the vl_t_ModStEnDate collection without verifying whether it was populated. THAT MAY BE THE CULPRIT. No claims... just guesses...
    But I sincerely request you to revisit your requirement and see if you really need this much code to address it.
    p.s. If you feel you don't have that much time (close deadlines etc.), take my word, it will be worth doing it now.

  • Full upload misses records while delta brings them

    Hi experts,
    I'm using 2LIS_12_VCHDR extractor and with delta exctractions everything is worknig correct. The problem comes when I want to make a full upload. It is not taking the last records (while delta took them). It is not bringing records into BW since the exact day I reloaded the 12 LIS tables.
    If I create a sales order, and this order is delivered, my delta infopackage brings it into BW, but my full infopackage is not bringing it.
    Any clues?
    Points will be given,
    Thanks

    My scenario for 2LIS_12_VCHDR is the following:
    - Full upload on Nov 13th (6 months ago): it brings 170,103 records.
    - I run another full upload today and I still obtain 170,103 records (the same as 6 months ago, even though there are a lot more by now).
    - I run an init request > create a delivery in ECC > run a delta upload, and it picks the delivery up.
    - Then I run a full upload, and it does not pick it up. I still have 170,103 records.
    Do I have to re-initialize 2LIS_12 in ECC (OLI8BW)?
    On the other hand, I have 2LIS_12_VAITM running with a daily delta. How would re-initializing all of 2LIS_12 affect this extractor? Am I going to miss data? Am I going to duplicate data?
    Thanks

  • CDC, journal data from Data Store won't load

    Hi, I was having problems yesterday with CDC and setting up a package to loop and load journal data, so today I decided to patch ODI with the 11g LogMiner and start again. I dropped the source schema and set up anew, following the Rittman guide: http://www.oracle.com/technology/pub/articles/rittman-odi.html
    I have run through starting the journal, and it all executes without errors. I can add a record to my source table, "Extend Window", then right-click on my data store -> Changed Data Capture -> Journal Data, and I can see my new record in the Designer.
    I have an interface with this data store as the source, and the "Journalized Data Only" checkbox is checked... but it just doesn't pick up the new records in that data store's journal.
    Is there something I am missing? I'm sure once I figure this out it will probably also solve my problem building the package to loop through and do this repeatedly.
    Cheers
    Damian

    Hi, a bit more investigation and a bit more info. Arif, I wonder if you have any idea why it's doing this...
    My source table is called S1SPK_DET.
    When I extend the window in the Designer and right-click on the data store, it presents my new record, and the SQL it runs looks at a view called JV$DS1SPK_DET (with a D following the $), with this query: select * from ODI_WORK.JV$DS1SPK_DET
    But when I run my interface that is supposed to get the journalized data from this table, in step four (4 - Loading - SS_0 - Load data) the loading query looks at another view, JV$S1SPK_DET (without the D following the $)... and this view is empty. I will paste the loading query below.
    Does anyone know why I have these two different views, and why the load step looks at the empty one?
    select     
         S1SPK_DET.SPK_NO     C1_COURSE_ID,
         S1SPK_DET.SPK_VER_NO     C2_COURSE_VERSION,
         JRN_SUBSCRIBER     JRN_SUBSCRIBER,
         JRN_FLAG     JRN_FLAG,
         JRN_DATE     JRN_DATE
    from     ODI_WORK.JV$S1SPK_DET S1SPK_DET
    where     (1=1)
    AND JRN_SUBSCRIBER = 'SUNOPSIS' /* AND JRN_DATE < sysdate */
