Query for on-hand quantity on a particular date for EAM

Hi all,
The following query shows the wrong value for on-hand quantity on a particular date, meaning the work order issue date, for the issued quantity.
Can you show me where the problem is in this query?
SELECT ood.organization_name,
       wrov.concatenated_segments item_number,
       wrov.item_description description,
       wrov.item_primary_uom_code uom,
       -- SUM (on_hand.on_hand) quantity_in_stock,
       SUM (on_hand.on_hand) + wrov.quantity_issued quantity_in_stock,
       we.wip_entity_name work_order_number,
       TRUNC (we.creation_date) work_order_date,
       fu.user_name work_order_made_by,
       wrov.quantity_issued,
       TRUNC (cic.material_cost, 2) issue_rate_per_qty,
       wrov.quantity_issued * TRUNC (cic.material_cost, 2) quantity_value,
       -- SUM (on_hand.on_hand) - wrov.quantity_issued balance_qty,
       SUM (on_hand.on_hand) balance_qty,
       SUM (on_hand.on_hand) * TRUNC (cic.material_cost, 2) cost,
       eam.attribute1 equipment_no
       -- gcc.segment6 equipment_no
  FROM wip_entities we,
       wip_requirement_operations_v wrov,
       org_organization_definitions ood,
       mtl_onhand_total_mwb_v on_hand,
       fnd_user fu,
       cst_item_costs cic,
       eam_work_orders_v eam
       -- gl_code_combinations gcc,
       -- wip_transaction_accounts wta,
       -- wip_transactions wt
 WHERE wrov.wip_entity_id = we.wip_entity_id
   AND ood.organization_id = wrov.organization_id
   AND wrov.inventory_item_id = on_hand.inventory_item_id
   AND on_hand.organization_id = ood.organization_id
   AND we.created_by = fu.user_id(+)
   AND cic.inventory_item_id = wrov.inventory_item_id
   AND cic.organization_id = wrov.organization_id
   -- AND we.wip_entity_name = '179542'
   AND wrov.organization_id = NVL (:organization_id, wrov.organization_id)
   AND (we.wip_entity_name = :job_order_No OR :job_order_No IS NULL)
   AND TRUNC (we.creation_date) BETWEEN NVL (:from_date, TRUNC (we.creation_date))
                                    AND NVL (:too_date, TRUNC (we.creation_date))
   AND eam.wip_entity_name = we.wip_entity_name
   AND eam.work_order_status IN ('Complete', 'Complete - No Charges', 'Released')
   -- AND gcc.code_combination_id = wta.reference_account
   -- AND wta.transaction_id = wt.transaction_id
   -- AND we.wip_entity_id = wta.wip_entity_id
GROUP BY ood.organization_name,
         wrov.concatenated_segments,
         wrov.item_description,
         wrov.item_primary_uom_code,
         we.wip_entity_name,
         we.creation_date,
         fu.user_name,
         wrov.quantity_issued,
         cic.material_cost,
         eam.attribute1
         -- gcc.segment6
ORDER BY we.wip_entity_name
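
For reference: mtl_onhand_total_mwb_v only exposes the current on-hand, so summing it can never give the stock as of the work order date. A minimal sketch of the usual as-of-date approach, rolling current on-hand back past transactions made after the date of interest (standard Oracle Inventory tables; the :as_of_date bind is a placeholder of mine, not from the query above):

-- current on-hand per item/org, minus the net of later transactions
SELECT onh.inventory_item_id,
       onh.organization_id,
       onh.total_onhand
       - NVL ((SELECT SUM (mmt.primary_quantity)
                 FROM mtl_material_transactions mmt
                WHERE mmt.inventory_item_id = onh.inventory_item_id
                  AND mmt.organization_id = onh.organization_id
                  AND mmt.transaction_date > :as_of_date), 0) onhand_as_of_date
  FROM (SELECT inventory_item_id,
               organization_id,
               SUM (transaction_quantity) total_onhand
          FROM mtl_onhand_quantities
         GROUP BY inventory_item_id, organization_id) onh

If the on-hand view returns more than one row per item (one per subinventory, say), joining a pre-aggregated inline view like this instead of the raw view would also stop the join from inflating the summed quantity.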

I don't think there is any special predicate to do that. The code simply gets all children of the given asset and returns them (of course there is some ordering logic). If you want the exact information, I would suggest using the JSON call that DAM makes, i.e.
HOST:POST/<PATH OF DAM>.assets.json
Or decompile AssetListServlet.java and see how it is implemented.
Yogesh
www.wemblog.com

Similar Messages

  • I am planning to switch to the "iPhone for Life Plan" with unlimited data. How do I determine the balance on my equipment installment plan?

    Yes I am considering switching from the VZ Edge to Sprint iPhone for Life plan.
    Thanks,
    Adrian
    >> Personal information removed to comply with the Verizon Wireless Terms of Service <<
    Edited by:  Verizon Moderator

  • FG stock on a particular date for a specific customer

    Hi all,
    I need to develop a report that shows the FG stock for a particular customer on a particular date. For example, on 5/11/11, what was the FG stock against customer xyz?
    Is there any logic available for this requirement? Can you please tell me which tables I should look at?
    Thanks

    FG stock for a particular customer on a particular date.
    Is it Make-to-Order stock? If it is not, then you may not find the details. If it is MTO, then use t-code MB5B. Using this, you can find the stock on a particular date.
    Regards,

  • Unable to Create PO for a Vendor with Particular payment terms

    Hi All,
    We have a scenario in an SRM 7.0 system where a PO is ending up in error in SRM for a vendor. In BBP_PD the error messages say "Enter payt terms" and "enter GR Non Valuated".
    Now in my SRM system I can see the PO has got the payment terms, and the PO is for a single account assignment. I have generated the BAPI data for BAPI_PO_CREATE1 and can confirm the error is coming from the backend, and in the BAPI the payment terms are correctly passed in the header record.
    I am not sure why this error comes from the backend. Has anyone encountered a similar issue?
    Anything to check with payment terms in the backend? I have one payment term working and a few of them are not working.
    Cheers,
    Iftekhar Alam.

    Note 1350184 - Payment terms from PO not transferred to backend
    BBP_PDHGP-PMNTTRMS
    Check that the payment term is available.
    Have you created this payment term in SRM, or pulled it from ECC?
    Note 1350185 - Payment terms from PO not transferred to backend
    Check each value in BBP_PAYTERM and match it with the ECC payment term table; sometimes some conditions are misinterpreted in SRM,
    for ex.
    Muthu

  • How does your entity, or entities you have worked with, manage data for their dev/quality/cert environments?

    Background: we have development and certification environments for SAP changes. These environments are kept in sync by ensuring that we adhere to a progression for configuration changes: they always move through the same dev, quality, certification, production process, with testing at each phase.
    As a business user, a challenge I encounter is having production-type data available in the dev/qual/cert environments so that I can adequately test changes. We refresh our environments periodically to a copy of production, including data. After the refresh, no new data goes to these environments, only test data as entered by individual users and testers. This means that tests performed soon after a refresh have a lot of relevant production data to use for testing, but as we move further through the year, the data is less and less available and less relevant.
    Some of our changes are impacted by volume, but due to how we handle our environments, volume impacts are difficult to test or assess.
    How do you manage production-like data for your dev/qual/cert environments? Do you encounter this same issue or have you found a solution?
    Thank you!

    Thanks for your comments; it is clear you understand my plight. The trouble is fully understanding which apps and app features are transferring data in the background any time you happen to turn WiFi on (even if you have had it off most of the day or night). Obviously things like Location Services can constantly be sending and receiving data from my iPhone without any action on my part. Also, if you have things like photo backup on iCloud, then each time you take a photo you are sending a copy out. All app updates, if set to Automatic, can also add up to quite a bit of data. Reading the news on AP or scrolling through the FB news feed actually adds up to a lot of data. There could be other culprits that I am not even thinking of. I don't want to turn apps like Find My iPhone off, or turn iCloud off, due to losing the value of such a program entirely. Again, thanks for your quick response.

  • Publication for Deski report with multiple data providers

    Hi,
    Has anyone been able to get a publication working that uses a Deski report with multiple data providers as the source? I'm trying to get a publication working that uses dynamic recipients and personalization. When I try to schedule the publication, I get the error "Object not found". 
    Thanks,
    Debbie

    Debbie,
    That's standard.
    It's useful for emulating outer joins in reports.
    Say you've got a sales report where you want to display all 12 months of the year in a crosstab, whichever month you run it in.
    We're only in May at the moment, though. With one data provider (SALES), you'll get a crosstab with months only up to May.
    If you create a separate data provider called MONTHS to return the months in the current year, you will then have a merged dimension of YearMonth in both.
    In your crosstab, if you just use YearMonth you'll get just the five months. If you qualify it with its data provider name (in our case MONTHS), you'll see the full twelve months shown.
    I hope that clears it up for you.
    Regards,
    Mark
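
    For anyone wanting to see the idea outside Deski: the MONTHS provider Mark describes is just a row generator for the current year's months, which in Oracle SQL could be sketched like this (a sketch of mine, not from the thread; the sales side would then outer-join on yearmonth):

    -- one row per month of the current year, to outer-join against sales
    SELECT TO_CHAR (ADD_MONTHS (TRUNC (SYSDATE, 'YYYY'), LEVEL - 1), 'YYYYMM') yearmonth
      FROM dual
    CONNECT BY LEVEL <= 12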

  • Problem with "Insufficient data for image" and embedded JPEG 2000 Images

    I frequently download PDFs from the http://www.glorecords.blm.gov web site. They are reporting a problem with Reader 10.1.4 and the PDF files they have.
    "("Insufficient data for image" and embedded JPEG 2000 Images)"
    I am experiencing the same "Insufficient data for image" error when opening their downloaded PDFs and viewing them in Acrobat X 10.1.4.
    Can someone address this please?
    Win 7 sp1

    Read this:
    http://blogs.adobe.com/dmcmahon/2012/08/21/acrobat-insufficient-data-for-an-image-error-after-updating-to-10-1-4-or-9-5-2/

  • Query to get row with max values for distinct

    I have a table test with ID, ADID, MTMST columns.
    ID   ADID   MTMST
    1    100    24-MAR-12 08.17.09.000000 PM
    1    101    24-MAR-12 08.18.15.000000 PM
    1    102    24-MAR-12 08.18.56.000000 PM
    2    103    24-MAR-12 08.19.21.000000 PM
    2    104    24-MAR-12 08.19.36.000000 PM
    2    105    24-MAR-12 08.19.46.000000 PM
    3    99     22-MAR-12 09.48.22.000000 PM
    I need the rows with max ADID for each ID.
    I used the following query, but it returned the max ADID of the whole table rather than the max per distinct ID:
    select * from test where ADID in (select max(ADID) from test where id in (select distinct(id) from test where mtmst > sysdate -1))
    Result:
    ID   ADID   MTMST
    2    105    24-MAR-12 08.19.46.000000 PM
    Expected result:
    ID   ADID   MTMST
    1    102    24-MAR-12 08.18.56.000000 PM
    2    105    24-MAR-12 08.19.46.000000 PM
    Thanks,
    Dheepan
    Edited by: Dheepan on Mar 24, 2012 9:53 AM

    select id, adid, mtmst
    from test
    where (id, adid) in (select id, max(adid) from test group by id)
    and mtmst > sysdate - 1
    is the answer.
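
    An equivalent analytic-function version, assuming the same TEST table, in case it is useful for comparison (my own sketch, not from the thread):

    -- rank rows per ID by ADID descending, keep the top row per ID
    select id, adid, mtmst
    from (select t.*,
                 row_number() over (partition by id order by adid desc) rn
          from test t)
    where rn = 1
    and mtmst > sysdate - 1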

  • How to retrieve DELTA for Particular date for FIGL(ECC) into BI

    Hi,
    The model of this ODS is in BW 3.5 (DS --> ODS --> Infocube).
    My loads are daily from 0FI_GL_4 (line items) into the ODS. There was an error in the load, so I deleted the request (in Manage, and made the QM status RED in the monitor also) from the ODS and also from the PSA for a particular date (xx.xx.xxxx).
    Now I tried loading the InfoPackage, but the data is not coming for that date (xx.xx.xxxx) into the ODS.
    I have set it to DELTA REPEAT in RSA7 (ECC), but after saving, it is again coming back to DELTA UPDATE.
    All the previous and next day loads were fine, but I did not get the data for xx.xx.xxxx.
    The selection screens for the DS and InfoPackage do not have 0CALDAY or 0PSTNG_DATE.
    Can you please help me out by telling me how to get the data for that particular date (xx.xx.xxxx) into BI.
    WOULD A REPAIR FULL REQUEST SOLVE MY PROBLEM?
    Full Points will be assigned.
    Regards,
    Bindu.

    Hi Viren,
    Initially I tried what you said, but an ERROR comes up: AS YOUR PREVIOUS DELTA IS INCORRECT, YOU CANNOT LOAD THIS DELTA. I am not getting any option of REPEAT DELTA.
    "I have set it to DELTA REPEAT in RSA7 (ECC), but after saving, it is again coming back to DELTA UPDATE."
    As I am not getting REPEAT DELTA, I tried saving 0FI_GL_4 as DELTA REPEAT in RSA7 (ECC) to get it.
    Please let me know any more information required.
    Thanks,
    Bindu.

  • Recurring inspection for a batch with early date

    Dear Gurus,
    I have a batch with retest date 31.07.2010.
    I need to create the inspection lot of recurring inspection for this batch today.
    I use transaction QA07, but this does not work for dates earlier than today.
    I tried to create the lot manually; the lot was created, but the quantity was not transferred to quality inspection and remained unrestricted.
    I need to create the lot and transfer the quantity to quality inspection. How can I handle this case?
    Regards,
    MaX

    Well, nothing is unacceptable. It's just that it needs a lot of documentation. You don't change the date and leave it. You change the date, manually run QA07 to create the 09 inspection, and immediately change the date back again. With proper documentation and approval from the QA department, this shouldn't be an issue. I am assuming this is a one-time thing; it's not like you are advocating a new business process.
    The batch record should already be self-documenting and show a record of all changes. While the FDA would frown on this being a regular business practice, if it is something that happens once and the proper paperwork is done, it shouldn't be an issue. It would, however, probably cause an auditor to ask what caused this in the first place, what corrective action was applied to prevent it from happening again, how many times this has happened, and whether it is expected to happen in the future. If all of that can be answered, there shouldn't be any issue.
    FF

  • Query on a table with indexed date field

    I have a table with a date column which is indexed. If I run a query like select column1 where date_field = '20-JAN-04', for example, it is fast and uses the index.
    If I run select column1 where date_field < '20-JAN-04', it is slow and doesn't use the index. I logged a TAR, and Oracle told me that this is to be expected, as not using the index in this case is the most efficient way of doing the query.
    Now, my concept of an index is like the index of the Yellow Pages (telephone directory), for example. If I look for a name that is, say, "Halfords" or below, I can see all entries from Halfords all the way to ZZZ in one block.
    I just can't see, in a common-sense way, why Oracle won't use the index in this type of query.
    George

    Using the concept of a telephone directory is wrong. In a telephone directory you have all the information ordered by name. However, in your table (if it is not an IOT) you don't have the rows ordered by your date_field. Rather, think of the document "Oracle9i Database Concepts" and its index.
    Let's say you want to find all indexed words larger than "ISO SQL standard" (ok, that doesn't make sense, but it is just an example). Would it be faster to read the whole document, or to look up each word in the index and then read the entire page (Oracle block) to find the word?
    It's not always easy to know in advance whether the query will be faster over the index or via a full table scan. What you need to do is analyze (dbms_stats) the table and its index well; in most cases Oracle chooses the right way. You may also use the hint /*+ index(table_name index_name) */ and see whether it would be faster over the index or not.
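    As a quick illustration of that last suggestion, a hedged sketch (the table and index names here are made up):

    -- refresh optimizer statistics for the table and its indexes
    exec dbms_stats.gather_table_stats(user, 'MY_TABLE', cascade => true)

    -- then force the index and compare the plan / elapsed time
    select /*+ index(t my_date_idx) */ column1
    from my_table t
    where date_field < to_date('20-JAN-04', 'DD-MON-RR')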
    A good document about that subject is:
    http://www.ioug.org/tech/IOUGinDefense.pdf
    HTH
    Maurice

  • Error in LSMW for MM41 - "No batch input data for screen SAPLMGMW 0100"

    I'm trying to create an LSMW for MM41 to create new retail articles en masse. This will be used only to create the Basic view, as other LSMWs already exist to maintain the other views. I've created the recording and set up the LSMW already. My data is read correctly from the source file. When I execute the batch session, it fails with the following error: "No batch input data for screen SAPLMGMW 0100".
    I assume the fields on the MM41 screen are not set up for batch processing. Could someone with more knowledge or experience in SAP Retail please confirm my assumption and explain where this is set up, or provide an alternative for creating new articles en masse via MM41? The direct input method is more suited to MM01, not MM41.
    Regards
    Fawaaz

    Hi Guys,
    Thanks so much for all the prompt responses. My issue is now resolved.
    The issue, I'm sure, was my default settings on the MM41 screen, which the recording probably took into account or the batch program didn't like. I deleted my recording and created a new one, but before I started the recording, I went into the MM41 screen and removed all default values AND view selections. I made my Basic view selection constant in my recording and also in my field mapping. My LSMW for MM41 is now working perfectly.
    Just to answer some of the questions above: my LSMW was reading and converting the data in my source file fine. When I display the read and converted data, it shows the data fields in the correct place.
    Thanks again.
    Regards
    Fawaaz

  • Speed of simple query to complicated view with few data increases after 3rd execution

    Hello everyone,
    I'm developing an OLTP application with a complicated view that is queried often by the application through Tomcat.
    The application itself uses a framework that generates the SQL on the fly, so no parameter binding is possible.
    What I experience is the following:
    If I execute the query from the application, it always takes about 3 seconds to complete, also for subsequent runs (Tomcat uses ojdbc6.jar).
    If I execute the same query from SQL Developer, it is the same behaviour. If I execute it from Toad, it's the same behaviour for the first two runs; afterwards it executes in <50 ms.
    I assume it has something to do with plan creation/caching. How can I prove it and, best of all, fix it?
    Right now I'm using 11.2 XE running on my laptop together with Tomcat. In production it will be 11.2 SE1.
    The plan itself shows little to no cost. This is expected, as the tables are almost empty and all joins are indexed as needed (the last row of 103 steps is):
    | Id  | Operation        | Name | Rows | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT |      |    1 |  4876 |    26  (12)| 00:00:01 |
    The statement itself is
      SELECT
        .....all view columns...
        FROM myview
       WHERE (myview.user_id = 1010)
    The statement is always generated like this, but with a changing user_id for different users. The plan ID is always the same, no matter what the user_id is.
    Is it possible to have the queries fast from the beginning / the 2nd run of the query?
    Is it only happening on 11.2 XE, and will it be gone on 11.2 SE1?
    Thank you & Best regards,
    Blama

    Hello,
    thanks for pointing me in the right direction. I read http://docs.oracle.com/cd/E18283_01/server.112/e16638/optimops.htm and am starting to understand the feature.
    From the examples there and in http://www.oracle.com/technetwork/articles/sql/11g-sqlplanmanagement-101938.html I'd expect different behaviour than I get. Please see this example:
    I'll use 3 SQLs in the example. These are:
    1:
    SELECT sql_id, plan_hash_value, CHILD_NUMBER, EXECUTIONS, BUFFER_GETS, IS_BIND_SENSITIVE AS "BIND_SENSI",
           IS_BIND_AWARE AS "BIND_AWARE", IS_SHAREABLE AS "BIND_SHARE"
    FROM   V$SQL x
    WHERE  SQL_TEXT LIKE 'SELECT /*mytest*/%';
    2:
    SELECT /*mytest*/ * FROM V_LEAD_MATCHED_UNDISMISSED WHERE user_id = 1010;
    3:
    SELECT /*mytest*/ * FROM V_LEAD_MATCHED_UNDISMISSED WHERE user_id = 1011;
    The test:
    SQL1: no rows returned.
    SQL2: 2 sec, 2 rows (expected).
    SQL1:
    SQL_ID        PLAN_HASH_VALUE CHILD_NUMBER EXECUTIONS BUFFER_GETS BIND_SENSI BIND_AWARE BIND_SHARE
    gz8m7d1k2ha27      3045614858            0          1         293 N          N          Y       
    SQL2: 3 secs, 2 rows.
    SQL1:
    SQL_ID        PLAN_HASH_VALUE CHILD_NUMBER EXECUTIONS BUFFER_GETS BIND_SENSI BIND_AWARE BIND_SHARE
    gz8m7d1k2ha27      3045614858            0          1         293 N          N          N        
    gz8m7d1k2ha27      1139782015            1          1         293 N          N          Y       
    SQL2: 3 secs, 2 rows.
    SQL1:
    SQL_ID        PLAN_HASH_VALUE CHILD_NUMBER EXECUTIONS BUFFER_GETS BIND_SENSI BIND_AWARE BIND_SHARE
    gz8m7d1k2ha27      3045614858            0          1         293 N          N          N        
    gz8m7d1k2ha27      1139782015            1          1         293 N          N          N        
    gz8m7d1k2ha27      1139782015            2          1         293 N          N          Y
    SQL2: 54 millisecs(!), 2 rows.
    SQL1:
    SQL_ID        PLAN_HASH_VALUE CHILD_NUMBER EXECUTIONS BUFFER_GETS BIND_SENSI BIND_AWARE BIND_SHARE
    gz8m7d1k2ha27      3045614858            0          1         293 N          N          N        
    gz8m7d1k2ha27      1139782015            1          1         293 N          N          N        
    gz8m7d1k2ha27      1139782015            2          2         377 N          N          Y
    SQL2: 65 millisecs(!), 2 rows.
    SQL1:
    SQL_ID        PLAN_HASH_VALUE CHILD_NUMBER EXECUTIONS BUFFER_GETS BIND_SENSI BIND_AWARE BIND_SHARE
    gz8m7d1k2ha27      3045614858            0          1         293 N          N          N        
    gz8m7d1k2ha27      1139782015            1          1         293 N          N          N        
    gz8m7d1k2ha27      1139782015            2          3         461 N          N          Y
    Now every time the statement is issued, it would use the fast child_number=2.
    I change the SQL to the one with the new user_id:
    SQL3: 2 secs, 2 rows (expected).
    SQL1:
    SQL_ID        PLAN_HASH_VALUE CHILD_NUMBER EXECUTIONS BUFFER_GETS BIND_SENSI BIND_AWARE BIND_SHARE
    gz8m7d1k2ha27      3045614858            0          1         293 N          N          N        
    gz8m7d1k2ha27      1139782015            1          1         293 N          N          N        
    gz8m7d1k2ha27      1139782015            2          3         461 N          N          Y        
    0psg36f5rf8q2      3045614858            0          1         293 N          N          Y    
    SQL3: 2 secs, 2 rows.
    SQL1:
    SQL_ID        PLAN_HASH_VALUE CHILD_NUMBER EXECUTIONS BUFFER_GETS BIND_SENSI BIND_AWARE BIND_SHARE
    gz8m7d1k2ha27      3045614858            0          1         293 N          N          N        
    gz8m7d1k2ha27      1139782015            1          1         293 N          N          N        
    gz8m7d1k2ha27      1139782015            2          3         461 N          N          Y        
    0psg36f5rf8q2      3045614858            0          1         293 N          N          N        
    0psg36f5rf8q2      1139782015            1          1         293 N          N          Y
    SQL3: 2 secs, 2 rows.
    SQL1:
    SQL_ID        PLAN_HASH_VALUE CHILD_NUMBER EXECUTIONS BUFFER_GETS BIND_SENSI BIND_AWARE BIND_SHARE
    gz8m7d1k2ha27      3045614858            0          1         293 N          N          N        
    gz8m7d1k2ha27      1139782015            1          1         293 N          N          N        
    gz8m7d1k2ha27      1139782015            2          3         461 N          N          Y        
    0psg36f5rf8q2      3045614858            0          1         293 N          N          N        
    0psg36f5rf8q2      1139782015            1          1         293 N          N          N        
    0psg36f5rf8q2      1139782015            2          1         293 N          N          Y 
    SQL3: 55 millisecs(!), 2 rows.
    SQL1:
    SQL_ID        PLAN_HASH_VALUE CHILD_NUMBER EXECUTIONS BUFFER_GETS BIND_SENSI BIND_AWARE BIND_SHARE
    gz8m7d1k2ha27      3045614858            0          1         293 N          N          N        
    gz8m7d1k2ha27      1139782015            1          1         293 N          N          N        
    gz8m7d1k2ha27      1139782015            2          3         461 N          N          Y        
    0psg36f5rf8q2      3045614858            0          1         293 N          N          N        
    0psg36f5rf8q2      1139782015            1          1         293 N          N          N        
    0psg36f5rf8q2      1139782015            2          2         377 N          N          Y
    SQL3: 56 millisecs(!), 2 rows.
    SQL1:
    SQL_ID        PLAN_HASH_VALUE CHILD_NUMBER EXECUTIONS BUFFER_GETS BIND_SENSI BIND_AWARE BIND_SHARE
    gz8m7d1k2ha27      3045614858            0          1         293 N          N          N        
    gz8m7d1k2ha27      1139782015            1          1         293 N          N          N        
    gz8m7d1k2ha27      1139782015            2          3         461 N          N          Y        
    0psg36f5rf8q2      3045614858            0          1         293 N          N          N        
    0psg36f5rf8q2      1139782015            1          1         293 N          N          N        
    0psg36f5rf8q2      1139782015            2          3         461 N          N          Y
    What I notice: the plan_hash_value is the same for each SQL's first execution. This is expected, as only the bind parameter ("user_id" in the SQL text) differs.
    I wonder if the rest is also OK the way it is. In production the SQL will be called with many different user_ids. Will it always take these steps of generating three rows in v$sql?
    Is there anything I can do to make calls 1-3 of each different user_id statement faster? I'm pretty sure the time taken is related to query parsing and plan building, but please correct me if I'm wrong.
    Thanks & Best regards,
    Blama
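
    Since the framework cannot bind, one thing worth testing (a sketch, not a verified fix for this case) is letting Oracle substitute system-generated binds for the literals, and comparing that against a manually bound run in SQL*Plus:

    -- session-level test: replace literals with system-generated binds
    alter session set cursor_sharing = force;

    -- manual bind comparison (SQL*Plus); :b_user_id is my own placeholder name
    variable b_user_id number
    exec :b_user_id := 1010
    select /*mytest*/ * from V_LEAD_MATCHED_UNDISMISSED where user_id = :b_user_id;

    If the bound run is fast from the first execution, that would support the literal-induced hard-parsing theory.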

  • Repeats for work items with temporary errors for background WI

    Hi!
    I have a task whose method raises a temporary error in order to activate "Repeats for work items with temporary errors". It currently has 5 repeats at intervals of 2 minutes. After running through these, the WI status is 'Error'. When restarted via SWPR, it runs only once and sends the message 'Work item set to error following 05 failed attempts'.
    Can anyone tell me a way to make a restart run the configured repetitions again?
    I welcome your comments.
    Thanks.
    Regards,
    Raisme Wilhelm

    Hi Raisme,
    Please show how the workflow is designed. It is probable that you are inside a repetitive sequence that sends several work items.
    Best Regards
    Victor Enzo
    Edited by: Victor Enzo Montenegro on Feb 23, 2011 3:34 PM

  • Double-click event not fired for IE 11 with compatibility mode for HTML element table

    Hi,
    I am facing an issue where the double-click event is not getting fired on double click of the mouse in IE 11 with compatibility mode, on the HTML element "table", on a Windows 7 64-bit machine. The double-click event runs fine on IE 10 with compatibility mode on Windows 7 64-bit.
    Can you please help to resolve the issue?
    Thanks and Regards,
    Yogesh

    Hi,
    F12 > Debug tab, click the 'Start' button (select "break on all exceptions" from the dropdown on the Debug tab).
    Click your table and correct any errors that are listed in the Console of the developer tool. Probably you are using attachEvent instead of addEventListener. You should be using addEventListener for IE9 and higher and for other web browsers (it is the W3C standard recommendation).
    Post questions about html, css and scripting for website developers to the MSDN IE Web Development forum. Include with your question a link to your website or a mashup that shows the issue.
    Rob^_^

Maybe you are looking for

  • 3rd Gen 15g isn't recognized by updater, and unknown pic on iPod HELP!

    The iPod updater is not recognizing the iPod, even though it is plugged in and it says "do not disconnect". The iPod itself shows a picture of the end of a pin connector with an arrow on it, and it won't go away. Please help! The iPod was working unti

  • Enterprise Services, Service Operations and Functional Business Component

    Hi, in the Enterprise Service design guide (see https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/40c05360-0501-0010-63bd-aedd415b6e05 ) the relations between Enterprise Services and Service Operations are covered. I understand

  • Need Help with Sound Recording

    Hi, I'm a Comp. Sys. Eng. student in my final year, and I've been trying to record sound with Java for a couple of months now, and it's driving me stupid. I've looked around on just about every Java source site I could find and have found many similar e

  • How many Objects are created in this?

    Hi, how many objects are created in the following code: String str = new String("Hello"); String str2 = "Hey there"; str = str + str2;

  • MD5 Password Support in DS5.2

    I need to import accounts from an Oracle Directory (OID) to Sun Directory 5.2p6. The passwords in the accounts use MD5. From what I read, MD5 is supported, but I am not finding how to configure it to work :). I do see a plugin called: NS-MTA-MD5,