Index help

Hi experts,
I have a select query :
    SELECT o~guid o~object_id o~process_type a~priority
        FROM crmd_partner AS p
          INNER JOIN crmd_link AS l ON l~guid_set = p~guid
          INNER JOIN crmd_orderadm_h AS o ON o~guid = l~guid_hi
          INNER JOIN crmd_activity_h AS a ON o~guid = a~guid
          INNER JOIN crmd_customer_h AS c ON o~guid = c~guid
        INTO TABLE t_oidlist_rg
        WHERE
          p~partner_no = me->gv_responsible_group AND
          l~objtype_hi = '05' AND
          l~objtype_set = '07' AND
          o~process_type IN lr_object_type AND
          o~description_uc LIKE lv_description AND
          o~object_type = me->buscategory AND
          o~object_id IN lr_objectid AND " done by the first query
          a~priority IN lr_priority AND
          c~zzaf_date BETWEEN lv_incident_from AND lv_incident_to.
I need to improve its performance. Please suggest how to create secondary indexes, since this query is causing problems in production.
Regards,
S Sarma.

To create a secondary index on a particular table:
1. Go to SE11 and enter the table name.
2. Choose the Indexes button (or press Ctrl+F5).
3. Enter an index name (three characters: letters, numerals, or a combination).
4. Enter a short description, then choose the Table Fields button.
5. Select the fields that should be part of the secondary index, press Copy, then check and activate the table.
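In the SAP data dictionary the index is defined through SE11 as described above, and the matching database index is generated automatically. Purely as an illustration of what that corresponds to at the database level for the query above, here is a sketch; it assumes Oracle as the underlying database, assumes the client column is named CLIENT, and uses made-up index names:

    -- Sketch only: in practice these indexes are defined via SE11, not by hand.
    -- CRMD_PARTNER is filtered on PARTNER_NO, so an index on (CLIENT, PARTNER_NO)
    -- lets the driving table be read selectively.
    CREATE INDEX "CRMD_PARTNER~Z01" ON CRMD_PARTNER (CLIENT, PARTNER_NO);

    -- CRMD_CUSTOMER_H is filtered with BETWEEN on ZZAF_DATE, which can use a range scan.
    CREATE INDEX "CRMD_CUSTOMER_H~Z01" ON CRMD_CUSTOMER_H (CLIENT, ZZAF_DATE);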

Similar Messages

  • Will reorg of tables and reindex of indexes help? And how?

    Suddenly users reported that one of their applications has become very slow. They also identified millions of rows of history data, which they deleted, but that did not help. Should we go for a reorg of tables and a reindex of indexes? Will this help at all? Will it help to gain space, or will it help with speed? How should I do it?
    Reorg tables
    1. First take export dmp of all the tables of that particular schema.
    2. Then truncate the tables.
    3. Then import that export backup.
    reindex indexes
    1. Find out how much space is used by the indexes.
    2. Add that much space to a different tablespace.
    3. Then move the indexes there (a sketch follows below).
    I am not sure about the steps I may have missed; I need some advice from you.
    And how can I reclaim the space on the Unix server after this delete operation?
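    A minimal sketch of steps 1 and 3 of the reindex list above, assuming Oracle and made-up schema, index, and tablespace names (an index rebuild both relocates and compacts the index):

    -- Step 1: check how much space the indexes currently use.
    SELECT segment_name, ROUND(bytes / 1024 / 1024) AS size_mb
    FROM   dba_segments
    WHERE  owner = 'APP_SCHEMA'
    AND    segment_type = 'INDEX';

    -- Step 3: rebuild (and thereby move) an index into the new tablespace.
    ALTER INDEX app_schema.idx_orders_custid REBUILD TABLESPACE indx_new;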

    With problems like this, I think the first response should not be that we need to reorg tables/indexes. The steps are as follows.
    1) Ask users which portion of the app is running slow, and how slow.
    2) Also, as others have suggested, generate an AWR report (or Statspack, or estat/bstat) and see where time is being spent.
    3) The problem could be that the execution plan of some SQL statements has changed and they are now performing badly; this is the most likely reason. Another possibility is that there is some other contention in the database, or an I/O problem. All of this you can find from these reports.
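    For reference, an AWR report can be produced from SQL*Plus with the standard script shipped with the database (a sketch; snapshots are normally taken automatically every hour):

    -- Optionally take a manual snapshot before and after the slow period,
    -- then run the standard report script and answer its prompts.
    EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
    @?/rdbms/admin/awrrpt.sql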

  • Indexing Help

    Hello,
    I have a table whose current size is around 1 TB. Every day we perform DML operations on the table involving millions of records. I have a couple of indexes created on that table. To keep the indexes up to date, would it be possible to rebuild an index
    only for the records affected by today's DML instead of for all records in the whole table?

    The table partitioning feature will help here to segregate data into different partitions (which can reside in different filegroups). Another option is to have an archive DB to store historical data. Table partitioning will require the least maintenance; an archive DB will entail some maintenance.
    Table partitioning is a way to efficiently manage large tables. It lets you partition a table based on a certain datetime field (like transaction time), which helps to switch out large amounts of data in a minute or less (you can switch a year's worth of data out to another table and/or delete it in less than a minute, as it is a metadata operation).
    Table partitioning will help even if you go with creating separate tables for each year - you can switch the data out of the main table faster without blocking production activities. If you wish to reduce the size of the current DB, you can still use this feature to switch the yearly data onto a new table within the same DB and then have a job move the new table to the archive DB without affecting production activities.
    In addition, you will see a performance improvement for all queries that are filtered on the partitioned column (more so if the partitions are stored on different volumes).
    Satish Kartan http://www.sqlfood.com/
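    A minimal sketch of the partition-switch idea described above, in SQL Server syntax with made-up table and object names (the switch target must have the same structure and sit on the same filegroup as the source partition):

    -- Partition the table by year on a datetime column.
    CREATE PARTITION FUNCTION pf_by_year (datetime)
        AS RANGE RIGHT FOR VALUES ('2013-01-01', '2014-01-01', '2015-01-01');

    CREATE PARTITION SCHEME ps_by_year
        AS PARTITION pf_by_year ALL TO ([PRIMARY]);

    -- Switching a whole partition out to an archive/staging table is a
    -- metadata-only operation, so a year's worth of rows moves in seconds.
    ALTER TABLE dbo.BigTable SWITCH PARTITION 2 TO dbo.BigTable_Archive_2013;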

  • Syncing source indexes help

    hello
    i am trying to use MacPorts in order to use the kinect to do interactive musical applications (http://openkinect.org/wiki/Getting_Started#OS_X).
    After installing the software, I have to run a number of lines of code in the terminal, but I get the above-mentioned response: "have you synced your source indexes?"
    I am new to all this, sorry. Thanks for your help.

    Yes, this is the issue.
    Although the iPhone supports manually managing music and videos, the iPhone does not support disk mode as is supported with the iPod. You can sync music and videos, or manually manage music and videos - you can do one or the other, not both at the same time. If you switch from one to the other, all iTunes content on your iPhone will be erased first, and I believe the only way to transfer ringtones is by syncing.
    Since the iPhone does not support disk mode, syncing iTunes content provides more flexibility IMO. If you are manually managing music and videos and you restore your iPhone with iTunes if wanted or needed, you must remember which songs and video that you manually transferred to your iPhone. If you make use of an iTunes playlist or multiple playlists for selected music and sync, the playlist or playlists selected under the Music tab for your iPhone sync preferences will be re-transferred to your iPhone without having to remember which music was manually transferred to your iPhone followed by manually re-transfering the music.
    To add additional music to your iPhone, add the music to a playlist in iTunes selected to be transferred to your iPhone followed by a sync. To remove a song from your iPhone, remove the song from the playlist in iTunes selected to be transferred to your iPhone followed by a sync.
    With sync only checked songs and videos selected under the Summary tab for your iPhone sync preferences when syncing music and videos, you can uncheck a song in an iTunes playlist selected to be transferred to your iPhone followed by a sync to remove the song from your iPhone without removing the song from the playlist in iTunes.

  • Lost Domain & html Index--help

    Is there any way to restore the SITE INDEX file manually (recreate it) when the site DOMAIN in iWeb has been deleted and the index file is not available (deleted from the ftp server-- not .Mac)? and the site doesn't come up in iWeb any longer.
    I can download the folders & index pages for the site as it was built in iWeb ('06), but without the main INDEX the site is no longer visible on the web.
    Can I import the files from the server back into iWeb and have it rebuilt?? or am I completely out of luck without the Domain that created the original site?
    At the very least-- if I can create the MAIN SITE INDEX and upload it-- the site would be visible on the web again. Suggestions?? Possible??
    Please don't ask how this happened....it's a really long story that has had many lessons involved.
    Thanks-

    Doesn't anyone make some import software or something for iWeb?
    No. If it could be easily done, such software would have been made available long ago, since iWeb has been in use for nearly 2 years now. If you want the special features offered by iWeb, you need to develop the habit of regularly backing up your Domain file. If that is a major drawback for you, there are plenty of alternative web editors you can use:
    http://www.pure-mac.com/webed.html

  • Is index helpful for "group by"?

    In an application a statement is executed:
    SELECT a b c d MAX(e) SUM(f) FROM my_db_tab
    GROUP BY a b c d.
    Does it make sense to add an index to my_db_tab for the fields a, b, c and d?
    Context: The ABAP is generated, so I cannot change it. my_db_tab contains approx. 8.000.000 records. The run-time for this statement is about 20 minutes. my_db_tab is currently searched sequentially.

    Hmmm, your DB table has 8.000.000 entries.
    What do you want to select? Without a WHERE condition you will select all records, which seems quite impossible to me, as no internal table will be able to hold 8 Mio records (if a, b, c and d are not only one-character fields).
    => Is there a WHERE condition? Usually the WHERE condition determines the index used. The GROUP BY does not influence it much.
    => If there is no WHERE despite the huge number of rows, what is it good for? GROUP BY is used for display, and who wants to see 8.000.000 records?
    Anyway, even if you read not all 8.000.000 records but still many, a full table scan is better than an index scan. With a full table scan the database accesses each table block once. If it follows the index it will come to each block several times, and for a large table not all blocks will stay in the cache, so blocks are loaded several times, which makes the index access even slower than the full table scan.
    => no index
    Siegfried

  • Will BW indexing help Webi reports performance

    Hi Experts,
    Environment
    BO Version:4.1 SP3
    Reporting Database:BW 7.4 sp3
    Recently we applied OSS note 0001898395 so that we could enable an index on the master data.
    We applied the note and created the index on 0profit_ctr in the InfoObject 0mat_plant.
    Reports are fast on the BEx side, but there is no difference in Webi report performance.
    Our opinion is that if BEx performance has increased, Webi performance should obviously increase as well.
    Please post your experiences.
    Thank you,

    Several things will affect the report speed. Note that there are two halves to it - the data source fetch time and the report render time.
    Data source fetch time is the time that it takes for all the data to be returned. BO will always look slower than the native tool because the native tool starts spooling out results as soon as it retrieves the first one - the more data returned, the slower BO will look by comparison.
    Report render time will depend upon the number and complexity of variables as well as the number of rows returned.
    Reducing the volume of data with correct aggregations and filters, rather than summing and filtering at report level, are the two best ways to improve performance without database/universe access.

  • SAP MDM Indexer help

    Hello,
    I am working on Catalog Publishing in SAP MDM.
    I have SAP MDM Indexer 7.1 installed on my machine, but I do not have an Adobe InDesign license.
    I want to gain knowledge of the MDM Indexer features and want to do some POC work.
    Please share your knowledge, and please provide some links where I can find relevant material.
    Thanks and Regards,
    Sameer.

    Hello,
    The method of working with mdm indexer is explained in the How to Guide:
    "How to Create Publications with SAP NetWeaver MDM Using MDM Publisher"
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/90f05885-162c-2c10-fba6-fffae9623388
    Regards,
    Hedda

  • Mail is stuck trying to index, help!

    Mail is trying to index itself and I can't stop it. I sent a 7 MB file which has somehow made Mail get its knickers all up in a bunch! The spinning progress cog on the right-hand side of the window just keeps going because it is trying to index and find this file, but it isn't making any progress (I tried to delete the file, but that doesn't work either). I also left it for about an hour but it didn't get anywhere, and I tried to stop it in the progress window, but it didn't stop... If anyone knows how I can rebuild without starting Mail, that would be good, or how to go into the Mail library and remove this file (I can't see anything there), or any other ideas?
    It still sends and receives, but it is using so much CPU that the computer has slowed to a standstill.
    thanks
    whyn
    dual usb ibook   Mac OS X (10.3.9)  

    You're welcome.
    Quit Mail first and take the Sent Messages.mbox back out to the Desktop.
    Copy the Sent Messages.mbox on the Desktop and place the copy in another location of your choosing such as your Documents folder for backup purposes.
    Repeat Show Package Contents for the original Sent Messages.mbox on the Desktop and delete all package content files except for the mbox file at 161.5mb in size.
    With the Mail.app quit and using the Finder, go to Home > Library > Mail > this POP account named folder. If you have not sent any messages with this account since moving the old Sent Messages.mbox to the Desktop, a new Sent Messages.mbox should not have been created yet within the account named folder. Move the Sent Messages.mbox from the Desktop within the account named folder.
    Launch Mail and select the account's Sent mailbox in the mailboxes drawer. The Mail.app will take additional time to re-index the mailbox. Allow this process to complete without doing anything else.
    If successful and when completed, all messages in the message list for this mailbox will appear as unread even though these are sent messages. To mark all messages as read, select a message in the message list and at the menu bar, go to Edit > Select All. At the menu bar, go to Message > Mark and select As Read.
    Use the Rebuild Mailbox function on this mailbox for good measure.
    If this resolves the problem and there are no missing messages, you can delete the backup copy of the Sent Messages.mbox.
    A Jaguar or Panther mailbox should have a single content_index file which shouldn't be very large in size. The multiple content_index files for this mailbox are the cause of the problem.
    Be sure to use the Rebuild Mailbox function on the most active mailboxes on a regular basis. I use it once a month or so depending on activity and don't allow an individual mailbox to approach or exceed 1GB in size.

  • A little Z index help please.

    I have a swf flipbook which obscures my menus. I know I have to change the wmode but can't seem to do it. I have made several attempts to no avail and my menus are still covered by the swf.
    I know it's been discussed ( I think I started the topic a while back) but I have had no luck.
    Here is my iframe:
    <iframe  style="width:1200px;height:600px"  src="http://www.rogersphoto.com/albums/71011/book.swf"  seamless="seamless" scrolling="no" frameborder="0" allowtransparency="true"></iframe>
    I tried this:
    <iframe  style="width:1200px;height:600px"  src="http://www.rogersphoto.com/albums/71011/book.swf?wmode=transparent"  seamless="seamless" scrolling="no" frameborder="0" allowtransparency="true"></iframe>
    and this:
    <iframe  style="width:1200px;height:600px"  src="http://www.rogersphoto.com/albums/71011/book.swf&wmode=transparent"  seamless="seamless" scrolling="no" frameborder="0" allowtransparency="true"></iframe>
    This must be the wrong spot?
    Can anybody post the correct spot?
    Thanks.

    I haven't stared at your code, but why not move the logic in your main into a method, for starters?
    public void run() {
        // Fields such as scanner, stack, stack2, junkStack, junk, inputLine
        // and searchString are assumed to be declared elsewhere in the class.
        System.out.print("Please enter your sentence: ");
        inputLine = scanner.nextLine();
        reverse(inputLine);
        System.out.println("Stack2: " + stack2);
        System.out.println("Stack: " + stack);
        System.out.println("Please enter the search string: ");
        searchString = scanner.nextLine();
        char firstSearchLetter = searchString.charAt(0);
        char lastSearchLetter = searchString.charAt(searchString.length() - 1);
        System.out.println("first letter: " + firstSearchLetter);
        System.out.println("last letter: " + lastSearchLetter);
        System.out.println("junkStack: " + junkStack);
        System.out.println("stack: " + stack);
        System.out.println("junk in between: " + junk);
    }

    public static void main(String[] args) {
        StackTest st = new StackTest();
        st.run();
    }
    Message was edited by:
    BigDaddyLoveHandles

  • Index, Hints, etc

    All,
    I was wondering whether you could please help out on the following points:
    (a) In the query below, what might be the suitable indexes to build, and why?
    =======================================
    SELECT CONTRACT_PAY_GROUPS.CPG_AMOUNT_CY_AP,
    CONTRACT_PAY_GROUPS.CPG_AMOUNT_EU_AP,
    CONTRACT_PAY_GROUPS.CPG_TOTAL_ELIGIBLE_AREA
    FROM CONTRACT JOIN CONTRACT_PAY_GROUPS ON CONTRACT.AC_SEQ = CONTRACT_PAY_GROUPS.CPG_CON_SEQ
    WHERE CONTRACT.AC_START_YEAR = 2005
    AND CONTRACT.AC_APPLICANT_NUMBER = '50614'
    AND CONTRACT_PAY_GROUPS.CPG_SCHEME = 'Ε'
    AND CONTRACT.AC_STATUS <> 2
    =======================================
    (b) Can the query below be (re)written in any better way in terms of performance (that is, which type of indexes and/or hints might it be important to create)?
    =======================================
    SELECT LYALLPLOTS.CP_END_YEAR
    FROM (SELECT TYPLOTS.CP_PE_PLOT_ID FROM (SELECT CP_PE_PLOT_ID,
    CP_PE_PT_SEQ,
    AC_APPLICANT_NUMBER,
    AC_START_YEAR,
    AC_END_YEAR,
    AC_STATUS,
    AC_TYPE
    FROM CONTRACT JOIN CONTRACT_PLOT ON CONTRACT.AC_SEQ = CONTRACT_PLOT.CP_CON_SEQ
    JOIN APPLICATIONS ON CONTRACT.AC_APPL_SEQ = APPLICATIONS.APPL_SEQ
    WHERE APPLICATIONS.APPL_YEAR = 2005
    AND CP_PE_PT_SEQ IS NOT NULL ) TYPLOTS JOIN
    (SELECT PT_SEQ, PT_PLOT_ID, PT_FROM_APPL_NUM, PT_TO_APPL_NUM FROM PLOTS_TRANSFER) TRPLOTS
    ON TYPLOTS.CP_PE_PT_SEQ = TRPLOTS.PT_SEQ) TYTRANSFERS
    JOIN (SELECT CP_PE_PLOT_ID, AC_APPLICANT_NUMBER, CONTRACT_PLOT.CP_END_YEAR FROM CONTRACT_PLOT JOIN CONTRACT
    ON CONTRACT_PLOT.CP_CON_SEQ = CONTRACT.AC_SEQ
    WHERE CONTRACT.AC_START_YEAR = 2004) LYALLPLOTS ON TYTRANSFERS.CP_PE_PLOT_ID = LYALLPLOTS.CP_PE_PLOT_ID
    WHERE TYTRANSFERS.CP_PE_PLOT_ID = '5101-53/16--143'
    GROUP BY LYALLPLOTS.CP_END_YEAR;
    =======================================
    (c) In general, could you please provide any insight on which types of indexes and/or hints (UNNEST, HASH_AJ, etc.) might be needed when we are joining tables?
    Any clue/support will be greatly appreciated. Thank you,
    -Pericles Antoniades.

    I'll echo sven's advice about not using hints unless you need to. I'm not telling you not to use hints. I am telling you to explore other solutions first, for the reasons sven mentioned.
    An indexing strategy can be tricky to come up with. There are guidelines to follow (given below), but also exceptions to those guidelines. Your own observations may differ from what I'm going to describe. Also, I only glanced at your posting and may have missed something. I'm going to describe b-tree indexes; bitmap indexes work a bit differently.
    Generally, indexes help do two things: get back small amounts of data quickly, and enforce uniqueness. A "small amount of data" is sometimes considered to be between 15-20% of the rows in a table in 9i/10g; more than this and you might be better off with direct table access. Indexed lookups work well with nested loops joins (this is where the execution plan becomes useful) or direct table access, while other join methods (usually hash joins) may be more efficient when joining most of the rows from two tables.
    What columns should you index? That's a matter of some debate. You can search OTN for other ideas. The best candidates for indexing are columns used in your WHERE clauses and/or those that make up a unique key. In your example
    ON CONTRACT.AC_SEQ = CONTRACT_PAY_GROUPS.CPG_CON_SEQ WHERE CONTRACT.AC_START_YEAR = 2005 AND CONTRACT.AC_APPLICANT_NUMBER = '50614' AND CONTRACT_PAY_GROUPS.CPG_SCHEME = 'Ε' AND CONTRACT.AC_STATUS <> 2
    I personally get the best results by listing the index columns in order from most to least restrictive, but that's a matter of some debate.
    I would consider putting a composite index on contract.ac_seq, contract.ac_start_year, contract.ac_applicant_number, and contract.ac_status, as well as a composite index on contract_pay_groups.cpg_scheme and cpg_con_seq. Then I would check whether they were being used from an execution plan, ultimately using timings and run statistics from SQL*Plus AUTOTRACE to decide whether they were helping.
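    A sketch of the two composite indexes suggested above; the column order shown is one reasonable choice (more selective predicates first), and the index names are made up:

    -- Verify usage afterwards with the execution plan and AUTOTRACE, as noted above.
    CREATE INDEX contract_ix1 ON contract
        (ac_applicant_number, ac_start_year, ac_status, ac_seq);

    CREATE INDEX contract_pay_groups_ix1 ON contract_pay_groups
        (cpg_con_seq, cpg_scheme);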

  • Trying to understand PARALLEL content index

    After browsing the Oracle documentation I have the following understanding:
    When we use the "PARALLEL n" clause while creating an index, the index creation is done in parallel, as described in https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=HOWTO&id=1290549.1. This is not related in any way to improving the performance of the query that uses that index. Right?
    A local partitioned index (as discussed in http://docs.oracle.com/cd/B28359_01/text.111/b28303/aoptim.htm#i1006756) needs the table to be partitioned if the search query is to perform better. Right?
    Also, specifying memory during CREATE INDEX helps during index creation. Does it help in making a search query using the CONTAINS clause perform better?
    Edited by: user3797564 on Jan 3, 2012 4:30 AM
    Edited by: user3797564 on Jan 3, 2012 4:37 AM

    When we use the "PARALLEL n" clause while creating an index, the index creation is done in parallel, as described in https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=HOWTO&id=1290549.1. This is not related in any way to improving the performance of the query that uses that index. Right?
    Correct.
    A local partitioned index (as discussed in http://docs.oracle.com/cd/B28359_01/text.111/b28303/aoptim.htm#i1006756) needs the table to be partitioned if the search query is to perform better. Right?
    Yes, and most of the time such queries also support parallel query.
    Also, specifying memory during CREATE INDEX helps during index creation. Does it help in making a search query using the CONTAINS clause perform better?
    Yes, you are correct. It doesn't help queries; it only helps index creation.
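    For reference, the three options discussed in this thread appear together in a CONTEXT index definition roughly as follows (a sketch with made-up table and column names; LOCAL requires the base table to be partitioned):

    -- PARALLEL and MEMORY speed up index creation; LOCAL makes the index
    -- partition-aligned, which is what allows queries to prune partitions.
    CREATE INDEX doc_text_idx ON docs (doc_text)
      INDEXTYPE IS CTXSYS.CONTEXT
      LOCAL
      PARAMETERS ('MEMORY 500M')
      PARALLEL 4;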

  • Group by in update and index

    Hi All,
    I was trying to run an update SQL statement as follows:
    update tableA a set a.foo =
    (select b.foo
    from another_schema.tableB b
    where b.bar = a.bar
    group by b.bar,b.foo);
    I have an index in tableA on the bar column (it is a number(10,0) column); also, in tableB there are no more than 3 rows per unique bar value. The update was taking more than 10 minutes (tableA has around 208K rows and tableB around 212K rows). I went into Enterprise Manager, looked at the performance analysis, and ran the Oracle advisor, which recommended creating an index on tableB containing the two columns bar, foo. I added this index and the update took 13 seconds. Why did this index help? If Oracle does the join first, it only has to group up to 3 rows at a time; also note that there are only around 1000 joined rows where such a case happens. Can someone explain this?
    thanks
    Tal

    Why do you use "group by" when you don't use an aggregate function?
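    Following up on that question: since no aggregate is used, the subquery returns the same rows with a plain DISTINCT, and the advisor's composite index most likely helped because tableB previously had no index on bar at all, so the correlated subquery had to scan tableB for every row of tableA; with (bar, foo) indexed, the subquery can be answered from the index alone. A sketch using the hypothetical names from the post:

    -- Equivalent rewrite without GROUP BY.
    UPDATE tableA a
    SET    a.foo = (SELECT DISTINCT b.foo
                    FROM   another_schema.tableB b
                    WHERE  b.bar = a.bar);

    -- The advisor-recommended composite index: the subquery can be answered
    -- from the index without touching the table.
    CREATE INDEX tableb_bar_foo_ix ON another_schema.tableB (bar, foo);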

  • Index on table columns that contain mixed upper- and lower-case values

    Hi,
    Can anyone tell me whether indexing will be of benefit when column values are not consistent - some uppercase, some lowercase, some mixed? The table contains 17 million rows. If an index is created, it will concatenate 7 columns. When searching, I need a WHERE clause like WHERE UPPER(LAST_NAME) LIKE UPPER(:P2_LAST_NAME_SEARCH) because the values loaded into the table are not consistent. Does the following index help or not?
    CREATE INDEX IDX_TEST ON TABLE1(COL1, COL2, ...) TABLESPACE "INDEX";

    You can try a function-based index for this.
    cheers
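    A sketch of the function-based index suggested in the reply, using the LAST_NAME example from the question (the other columns would follow the same UPPER(...) pattern; the index name is made up):

    -- The indexed expression must match the expression used in the WHERE clause.
    CREATE INDEX idx_test_fbi ON table1 (UPPER(last_name)) TABLESPACE "INDEX";

    -- A predicate of this form can then be satisfied by the index
    -- (provided the search value does not start with a wildcard):
    --   WHERE UPPER(last_name) LIKE UPPER(:P2_LAST_NAME_SEARCH)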

  • Index on multiple columns issue Oracle 9i

    Hi,
    I have a couple of issues and would appreciate it if anyone can help:
    a). I have a table that has indexes on multiple columns, so for example:
    Index 1 on Org_Type, Org_Id and Effdt
    Index 2 on FICE_CD, Org_Id and Effdt etc.
    I have 9 such indexes and all of them have Effdt in them.
    My question is: if I query the table and always use a subquery to get the max Effdt rows from the table, will the above indexes help my query, or will I have to create another index just on Effdt so that my query runs faster?
    b). I have a target table with more than 20 million rows in it. This table has an Effdt too, and I would like to find the max effdted row in this table as well, so that I can then use that date-time stamp in my ETL tool to do an incremental update from my source table each night. But since the number of rows is huge, my query runs forever. I have a normal non-unique index on Effdt. Is there another way I can optimize this table? Currently I have resorted to getting all rows from the source that are >= (sysdate - 1), but I would prefer to get the max date-time stamp from the target table itself.
    Thanks,
    CJ

    Hi,
    Thanks for the input. I just used the explain plan and found that there is a full table scan happening on PS_EXT_ORG_TBL, so I guess I will have to create an index specifically on the effdt field. The SQL is as follows:
    Select E.EMPLID, E.INSTITUTION, E.EXT_ORG_ID, E.EXT_CAREER, E.EXT_DATA_NBR, E.EXT_SUMM_TYPE, E.UNT_ATMP_TOTAL, E.UNT_COMP_TOTAL,
    E.CLASS_RANK, E.CLASS_SIZE, E.PERCENTILE, E.UM_GPA_EXCLUDE, E.UM_EXT_ORG_GPA, E.CONVERT_GPA,
    E.UM_EXT_ORG_CNV_CR, E.UM_EXT_ORG_CNV_GPA, E.UM_EXT_ORG_CNV_QP, E.UM_GPA_OVERRIDE, E.EXT_ACAD_LEVEL,
    F.FROM_DT, F.TO_DT, G.EXT_DEGREE_NBR, G.DEGREE, G.DESCR "DEGREE DESCR", G.DEGREE_DT, H.EFFDT,
    H.EFF_STATUS, H.SCHOOL_CODE, H.LS_SCHOOL_TYPE, H.ATP_CD, H.CITY, H.STATE, H.COUNTRY,
    H.DESCR "SCHOOL DESCR", H.PROPRIETORSHIP
    FROM PS_EXT_ACAD_SUM E, PS_EXT_ACAD_DATA F, PS_EXT_DEGREE G, PS_EXT_ORG_TBL H
    WHERE E.EXT_ORG_ID = F.EXT_ORG_ID AND E.EMPLID = F.EMPLID
    AND E.EXT_ORG_ID = G.EXT_ORG_ID AND E.EMPLID = G.EMPLID
    AND E.EXT_ORG_ID = H.EXT_ORG_ID
    AND H.EFFDT = (SELECT MAX(EFFDT) FROM PS_EXT_ORG_TBL H1
                    WHERE H.EXT_ORG_ID = H1.EXT_ORG_ID
                      AND H.EFF_STATUS = H1.EFF_STATUS
                      AND H.SCHOOL_CODE = H1.SCHOOL_CODE
                      AND H.LS_SCHOOL_TYPE = H1.LS_SCHOOL_TYPE
                      AND H.ATP_CD = H1.ATP_CD
                      AND H.CITY = H1.CITY
                      AND H.STATE = H1.STATE)
    My source DB is a copy of the transactional database, so there is no danger of new rows coming in.
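    Given the correlated MAX(EFFDT) subquery above, the usual approach is a composite index on the correlation columns with EFFDT as the trailing column, so the maximum can be picked up from the index; a sketch with a made-up index name:

    CREATE INDEX ps_ext_org_tbl_effdt_ix ON ps_ext_org_tbl
        (ext_org_id, eff_status, school_code, ls_school_type,
         atp_cd, city, state, effdt);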
