OBJECT_ID for a PK or Index

Hi friends,
For the first time I need to check whether a PK and an index exist before dropping and re-creating them, and I need your help.
IF OBJECT_ID(N'PK_LMT', N'PK') IS NOT NULL SELECT 1 (does not work; it just reports Command(s) completed successfully)
IF OBJECT_ID('PK_LMT', 'PK') IS NOT NULL SELECT 1 (does not work either; again Command(s) completed successfully)
While:
SELECT * FROM sys.objects WHERE name = 'PK_LMT' (this one does return the row)
Similarly, how do I check whether an index (named IX_LGUF_DT_CUST) exists? Is querying sys.indexes the only way?
Thanks for all your help.

Please check whether your SQL login has sufficient permissions. Also, use the OBJECT_ID() function without specifying the object type, and qualify the name with the database, like this:
SELECT OBJECT_ID('AdventureWorks2014..PK_ErrorLog_ErrorLogID');
If the object is not in the default schema, use the full name.
More info: OBJECT_ID (Transact-SQL)
About the second question: as Erland noted, we can use sys.indexes, but we have to filter on the specific table, like this:
SELECT *
FROM sys.indexes
WHERE name = 'IX_LGUF_DT_CUST'
AND object_id = OBJECT_ID('TableName');
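Putting both checks together, here is a minimal drop-if-exists sketch. The table name dbo.LMT is an assumption on my part; substitute your own schema and table:
IF OBJECT_ID(N'dbo.PK_LMT') IS NOT NULL             -- PK constraints are rows in sys.objects
    ALTER TABLE dbo.LMT DROP CONSTRAINT PK_LMT;
IF EXISTS (SELECT 1 FROM sys.indexes
           WHERE name = N'IX_LGUF_DT_CUST'
             AND object_id = OBJECT_ID(N'dbo.LMT')) -- index names are only unique per table
    DROP INDEX IX_LGUF_DT_CUST ON dbo.LMT;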
Cheers,
Saeid Hasani
Database Consultant

Similar Messages

  • What the Initial value for sy-tabix & sy-index

    Hi Folks
       I have a small doubt.
What is the initial value of sy-tabix & sy-index?
    Can anyone please clarify me?
    Regards,
    Sree

    hi sree,
both values are initialized to 0 before processing; during processing they are set according to the construct in use (sy-tabix for internal table operations, sy-index for DO/WHILE loops).
    if helpful reward some points.
    with regards,
    suresh babu aluri.

  • ABAP Routine for Deleting and creating index for ODS in Process chains

    Any pointers for the ABAP Routine code for deleting and creating index for ODS in Process chains.

    Hi Sachin,
find the following ABAP code to delete the ODS index.
    data : v_ods type RSDODSOBJECT.
    move 'ODSname' to v_ods .
    CALL FUNCTION 'RSSM_PROCESS_ODS_DROP_INDEXES'
      EXPORTING
        I_ODS = v_ods.
    To create index:
    data : v_ods type RSDODSOBJECT.
    move 'ODSname' to v_ods .
    CALL FUNCTION 'RSSM_PROCESS_ODS_CREA_INDEXES'
      EXPORTING
        I_ODS = v_ods.
    hope it helps....
    regards,
    Raju

  • 'Incorrect value type for array element in index N'

Please help. Preflight is flagging multiple 'Incorrect value type for array element in index N' > 'Required key /F is missing' > 'Array element at index 0 is of wrong type' messages for pages throughout my text document with embedded navigation and annotations. What does it mean? Is there anything I can do?
    The origins of the file are from a selection of emails taken from Mail and gmail, and letters probably written on pc (Office) and converted to pdf...
    I'm on Mac Mavericks.
    Thanks for reading.

    What problem are you trying to solve with preflighting? What are you checking for?

  • How does select stmt with for all entries uses Indexes

    Hello all,
I have gone through a number of documents but am still confused about how SELECT ... FOR ALL ENTRIES uses indexes if the fields are not in the index sequence. I got pretty much the same results in two test cases on HR tables HRP1000 and HRP1001 (FOR ALL ENTRIES based on HRP1000). The index field sequence on HRP1001 is (MANDT, OTYPE, OBJID, PLVAR, RSIGN, RELAT, ISTAT, PRIOX, BEGDA, ENDDA, VARYF, SEQNR). In the second case the OBJID field is in the same position as in the defined index, but I don't see a significant difference even though there are around 30000 records. My questions: does it make a difference to list the fields in the same sequence as the table index rather than in a different order, and how can we tell whether a table index is used by a FOR ALL ENTRIES query? I tried Explain in ST05, but it is not clear whether any index is used at all for the HRP1001 read.
    here is the sample code i use to get test results.
    test case 1
    REPORT  zdemo_perf_select.
    DATA: it_hrp1000 TYPE STANDARD TABLE OF hrp1000 WITH HEADER LINE.
    DATA: it_hrp1001 TYPE STANDARD TABLE OF hrp1001 WITH HEADER LINE.
    DATA: it_hrp1007 TYPE STANDARD TABLE OF hrp1007 WITH HEADER LINE.
    DATA: it_pa0000 TYPE STANDARD TABLE OF pa0000 WITH HEADER LINE.
    DATA: it_pa0001 TYPE STANDARD TABLE OF pa0001 WITH HEADER LINE.
    DATA: it_pa0002 TYPE STANDARD TABLE OF pa0002 WITH HEADER LINE.
    DATA: it_pa0105_10 TYPE STANDARD TABLE OF pa0105 WITH HEADER LINE.
    DATA: it_pa0105_20 TYPE STANDARD TABLE OF pa0105 WITH HEADER LINE.
    DATA: t1 TYPE timestampl,
          t2 TYPE timestampl,
          t3 TYPE timestampl.
    SELECT * FROM hrp1000 CLIENT SPECIFIED  INTO TABLE it_hrp1000 bypassing buffer
                WHERE mandt EQ sy-mandt AND
                      plvar EQ '01' AND
                      otype EQ 'S' AND
                      istat EQ '1' AND
                      begda <= sy-datum AND
                      endda >= sy-datum AND
                      langu EQ 'EN'.
    GET TIME STAMP FIELD t1.
    SELECT * FROM hrp1001 CLIENT SPECIFIED INTO TABLE it_hrp1001 bypassing buffer
                FOR ALL ENTRIES IN it_hrp1000
                 WHERE mandt EQ sy-mandt AND
                        otype EQ 'S' AND
    *                    objid EQ it_hrp1000-objid and
                        plvar EQ '01' AND
                        rsign EQ 'B' AND
                        relat EQ '007' AND
                        istat EQ '1' AND
                        begda LT sy-datum AND
                        endda GT sy-datum and
                        sclas EQ 'C' and
                        objid EQ it_hrp1000-objid.
    *                    %_hints mssqlnt 'INDEX(HRP1001~0)'.
    *delete it_hrp1001 where sclas ne 'C'.
    GET TIME STAMP FIELD t2.
t3 = t2 - t1. "elapsed time
    WRITE: 'Time taken - ', t3.
    test case 2
    REPORT  zdemo_perf_select.
    DATA: it_hrp1000 TYPE STANDARD TABLE OF hrp1000 WITH HEADER LINE.
    DATA: it_hrp1001 TYPE STANDARD TABLE OF hrp1001 WITH HEADER LINE.
    DATA: it_hrp1007 TYPE STANDARD TABLE OF hrp1007 WITH HEADER LINE.
    DATA: it_pa0000 TYPE STANDARD TABLE OF pa0000 WITH HEADER LINE.
    DATA: it_pa0001 TYPE STANDARD TABLE OF pa0001 WITH HEADER LINE.
    DATA: it_pa0002 TYPE STANDARD TABLE OF pa0002 WITH HEADER LINE.
    DATA: it_pa0105_10 TYPE STANDARD TABLE OF pa0105 WITH HEADER LINE.
    DATA: it_pa0105_20 TYPE STANDARD TABLE OF pa0105 WITH HEADER LINE.
    DATA: t1 TYPE timestampl,
          t2 TYPE timestampl,
          t3 TYPE timestampl.
    SELECT * FROM hrp1000 CLIENT SPECIFIED  INTO TABLE it_hrp1000 bypassing buffer
                WHERE mandt EQ sy-mandt AND
                      plvar EQ '01' AND
                      otype EQ 'S' AND
                      istat EQ '1' AND
                      begda <= sy-datum AND
                      endda >= sy-datum AND
                      langu EQ 'EN'.
    GET TIME STAMP FIELD t1.
    SELECT * FROM hrp1001 CLIENT SPECIFIED INTO TABLE it_hrp1001 bypassing buffer
                FOR ALL ENTRIES IN it_hrp1000
                 WHERE mandt EQ sy-mandt AND
                        otype EQ 'S' AND
                        objid EQ it_hrp1000-objid and
                        plvar EQ '01' AND
                        rsign EQ 'B' AND
                        relat EQ '007' AND
                        istat EQ '1' AND
                        begda LT sy-datum AND
                        endda GT sy-datum and
                        sclas EQ 'C'." and
    *                    objid EQ it_hrp1000-objid.
    *                    %_hints mssqlnt 'INDEX(HRP1001~0)'.
    *delete it_hrp1001 where sclas ne 'C'.
    GET TIME STAMP FIELD t2.
t3 = t2 - t1. "elapsed time
    WRITE: 'Time taken - ', t3.

    Mani wrote:
Thank you for your answer, it's very helpful, but I am still not sure how the parameter rsdb/max_blocking_factor affects the result size.
    Hi,
    The blocking affects the size of the statement and the memory structures for returning the result.
    So if your itab has 500 rows and your blocking is 5, the very same statement will be executed 100 times.
    Nothing good or bad about this so far.
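To illustrate, with a blocking factor of 5 each of those executions arrives at the database as an in-list statement of roughly this shape (a sketch; the real statement lists all columns and conditions):
SELECT * FROM "HRP1001"
WHERE "MANDT" = ? AND "OTYPE" = ? AND "PLVAR" = ?
  AND "OBJID" IN ( ?, ?, ?, ?, ? );   -- 5 values per execution, 100 executions for 500 rows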
Assume your average result for an in-list-5 statement is 25 records with an average size of 109 bytes.
Your average result size will be 2725 bytes plus overhead, which fits almost perfectly into two 1500-byte Ethernet frames.
Nothing to do in this case.
Now assume your average result for an in-list-5 statement is 7 records with an average size of 67 bytes.
Your average result size will be ~470 bytes plus overhead, which only fills 1/3 of a 1500-byte Ethernet frame.
In this case, setting the blocking to 12 ... 15 will give you a 66% network transfer performance gain,
and it reduces the number of calls to the DB by 50%, giving additional benefit.
Now this is an extreme example. The longer the average row length, the lower the average loss in the network.
You have the same effects in memory structures, but on that layer you are fighting single microseconds instead of
hundreds of them, so in real life it is rarely measurable.
Depending on table statistics, Oracle might decide for short in-lists to use a concatenation instead of an in-list.
This is supposed to be more costly, but I never had a case where I could prove a big difference.
Values from 5 to 15 for the blocking seem OK to me. If you have special statements in customer coding,
it #might# be beneficial to do the mentioned calculations and some network tracing to see if you can squeeze more
efficiency out of your network by tuning the blocking.
If you have jumbo frames enabled, it might be worth analyzing as well.
If you are on a DB-CI system that is loopback-connected to the DB, I doubt there would be a big outcome.
    Hope this helps
    Volker

  • My observations ( for people that have indexing and connection issues )

    Since cisco takes an awful lot of time doing anything about these problems i decided to investigate the box myself today.
I followed advice from various threads and did some digging around with network traffic snoopers and some other tools.
First ( and this is buried somewhere deep in the documentation ) you can NOT have ANY file or folder name that is longer than 32 characters. This is a limitation of the stupid linux they are running on ! If you have a file or folder with a longer name it may screw up the indexing.
Second : file and path names can only contain alphanumeric characters and spaces. DO NOT USE ANY other character; it messes up the indexing. I had folders named -= folder 1 =-. they never got indexed. as soon as i removed the -= and =- the indexing kicked in .... This is again a twonkyvision / loonix problem
    The box has tremendous problems with ghost devices on the network. I have on my network at home : 4 pc's running XP ( some pro some home edition )  2 laptops with Vista. One box with Win7. One Windows home Server. One linksys Skype phone , One Roku soundbridge, one Dlink DNS323 , one ADS network drive , one Simpletech Simplestor , A HP color laserjet , a Hp 6250 , 7650 and some other HP network printer. Thru the wireless link my PDA ( iPaq) and iPhone connect once in a while too. My blu ray player is also hooked up (netflix streaming). And then there are various experimental systems ( i am an electronics engineer and i have built some gadgets that are network connected that allow me to remotely turn on lights monitor temperature etc. )
Now , the NMH does not correctly detect most of the devices. It keeps on trying to feed information to the printers ... it also tries feeding information to other NAS devices as well as to the Windows Home Server ... It falsely identifies one of the printers as a dlink ethernet connected dvd player ...
    It also has problems with devices that use static ip addresses on the network ( i set up the printers and other NAS devices with hardcoded ip addresses. so no dhcp )
    So here is what i did : go to twonky configuration ( port 9000 see tony's article )
    Step 1 : Yank out the ethernet cable to ANY OTHER DEVICE except the pc you are working on , your router and the NMH <- this is important
    step 2 : Hit the button to erase all the devices it discovered.  (reset list)
    Step 3 : hit the SAVE button
    step 4: UNCHECK the box next to the 'reset list' button
    step 5: hit the SAVE button
    Now , on the left hand side click on Maintenance
    Click on ALL checkboxes under Log level. they should ALL be checked
    Hit the Clear Logs button
    Hit Save changes
Hit RESTART server. you will get a file not found error page. you will see in your browser's title bar that the url changes to an ip address with some text behind it. remove all that text and key in :9000 and hit return. it will take you back to twonky. ( i don't know how important this step is , but i did not go in through the device name , i used the ip address from this point on. normally it should not matter but you never know ( i have a suspicion i will explain later )
Write down this IP address. it is useful to know.
Now since we are back in wonkyvision ( stupid half-baked program ) go back to the maintenance screen.
now hit the rebuild database button.
    You should hear disk activity now.
    Hit the Show log  button once. the log files should open.
    you can refresh this screen by hitting the reload button in your browser.
    you should see messages fly by like
    21:14:40:317 LOG_SYSTEM:fsmon_add_watch inotify_add_watch (12, /share/media/photos/Moorea May 1998) returns 2045
    21:14:40:317 LOG_DB:watch wd=2045 added on /share/media/photos/Moorea May 1998
    21:14:40:379 LOG_DB:upnp_folder_watch_add_dir /share/media/photos/Moorea May 1998/New Folder
this means it is probing the entire directory structure and adding files and paths.
    Let it run for a while. hitting refresh on your browser once in a while. it took a couple of hours on mine ( 76000+ pictures ... ) and a couple of hundred songs + some videos.
    once disc activity ceases : go back to twonky port 9000 , go in to maintenance and hit Clear Logs button.
Hit Save button
Hit restart server. you will again get an error page. get rid of the rubbish behind the ip address and key in :9000
    go to the clients and security page.
Make sure Automatic discovery is still turned OFF ( if it is on you will see in the logbooks that it attempts several times a second to connect to anything it can find. Since the detection process is flawed it bombs out. this may overload the poor cpu in the NMH... )
    Hit the reset list once more
    hit save
    Now turn automatic discovery on and hit Save.
go back to maintenance and hit restart server. again on the error page : erase the garbage after the ip address and go back to port 9000.
if you now go back to the client/sharing page you should see 2, possibly 3, devices. one is your router , one is the pc you are working on, and the last one is the same ip address as you see in the browser. ( the ip address you are using to talk to the nmh )
    make sure all the checkboxes before these devices are checked. and hit the save button once more.
at this point i unchecked the automatic discovery and hit save once more. i go back to maintenance and hit restart server for the last time.
At this point the nmh restarted twonky and immediately there was a ton of disc activity. i opened the normal nmh user interface and lo and behold : the green spinning arrow started to move and progress was going forward. it increased 1% roughly every 10 seconds or so. when it finally hit 100% everything was there. as it should be.
    Now. speculation on my part
- this thing has trouble with long file names and non-alphanumeric characters.
- this thing has trouble with devices it incorrectly identifies or cannot identify. this screws up wonkymedia.
- the communication between the process on the NMH ( that serves the flash user interface running on your browser ) and winkymedia goes through a network port itself. There are problems. they do not do inter-process communication but go via network messages ... ( this is kind of dumb as it loads the network... )
    proof :
    21:01:03:625 filescanner thread started
    21:01:03:628 LOG_SSDPSDP_notify_packet ### SSDP sending:
    NOTIFY * HTTP/1.1
    HOST: 239.255.255.250:1900
    CACHE-CONTROL: max-age=99999
    LOCATION: http://192.168.1.66:9000/DeviceDescription.xml
    NT: urn:schemas-upnp-org:service:ContentDirectory:1
    NTS: ssdp:alive
    SERVER: Linux/2.x.x, UPnP/1.0, pvConnect UPnP SDK/1.0
the filescanner sends messages through network port 1900 to the NMH. Since the filescanner is running on the NMH ... i also have no clue what 239.255.255.250:1900 is ...
    i also see sometimes the following messages fly by
    21:01:03:894 [Error] - LOG_HTTP:HTTP_get_header Cannot receive header, clientSocket=13, nBytesReceived=0
    21:01:03:894 LOG_HTTP:HTTP_send_receive received no header in HTTP_send_receive, propably client closed socket, URL=http://192.168.1.66:9000/
my suspicion is that , since the various processes running on the NMH ( the indexer , the UI server and all the twonky processes ) all intercommunicate through the network port , this is a problem. if the network settings get corrupted ( because of false identification , network overload or whatever ) the thing jams up.
by cleaning out all the false identities , letting it identify itself ( important for its own process communication ) and a pc , and then turning off the detection , this solves that problem.
limiting the filesystem to 'clean 32 char max' names solves another problem.
I also see a lot of keep-alive messages fly by on the network ( several a second. )
    i eventually plugged in my other computers and the roku , let it autodetect for a while and turned this feature back off. Then i plugged in all other devices the NMH has no business with.
    So far it still works fine.
I still do see a ton of messages fly by where the NMH is probing the network on existing devices. they come back with 'device already validated'
    21:01:04:140 LOG_CLIENT_DB:The entry found in known clients list is already validated (ip=192.168.1.76)
    21:01:04:140 LOG_CLIENT_DB:checking http header for entry ip=192.168.1.76, mac=
    21:01:04:140 LOG_CLIENT_DB:Checking http header to find a matching client.db entry (ip=192.168.1.76)
    21:01:04:141 LOG_CLIENT_DB:Ignoring client with fixed flag = TRUE (ip=192.168.1.76)
    ( .76 is the pc i am working on right now. )
    i don't know why they keep probing. i am not streaming anything and auto detect is turned off ..
    anyway i will keep you guys posted on how this evolves.
    One thing is for sure. this is another half baked ' broken source' based system.

    further observations :
    22:24:31:868 LOG_CLIENT_DB:checking http header for entry ip=192.168.1.66, mac=
    22:24:31:868 LOG_CLIENT_DB:Checking http header to find a matching client.db entry (ip=192.168.1.66)
    22:24:31:869 LOG_CLIENT_DB:HHetting client adaptation to 49 (ip=192.168.1.66)
    22:24:36:878 LOG_CLIENT_DB:The entry found in known clients list is already validated (ip=192.168.1.66)
    22:24:36:878 LOG_CLIENT_DB:checking http header for entry ip=192.168.1.66, mac=
    22:24:36:878 LOG_CLIENT_DB:Checking http header to find a matching client.db entry (ip=192.168.1.66)
    22:24:36:879 LOG_CLIENT_DB:HHetting client adaptation to 49 (ip=192.168.1.66)
    22:24:37:926 LOG_CLIENT_DB:The entry found in known clients list is already validated (ip=192.168.1.76)
    22:24:37:926 LOG_CLIENT_DB:checking http header for entry ip=192.168.1.76, mac=
    22:24:37:926 LOG_CLIENT_DB:Checking http header to find a matching client.db entry (ip=192.168.1.76)
    22:24:37:927 LOG_CLIENT_DB:Ignoring client with fixed flag = TRUE (ip=192.168.1.76)
    22:2
this thing keeps on probing itself ... i wonder why. ( .66 is the NMH , .76 is my pc ... ) it is also strange that it cannot retrieve its own mac address....
Oh , you can turn off the logging features again when done. it only takes time and disk space on the NMH
    And before i get flamed : the comments i make about 'broken source' : i have no gripe with linux. I have a problem with companies that grab a bunch of stuff that is free ,slap it together, sell it for a lot of money and give no support to the people that bought it. They want all the money for no effort .. they turn open source into broken source ...

  • Populate Values for Drop Down by Index in Table in Web Dynpro Java

    Hi Experts,
I have a table with a column whose table cell editor is a Drop Down by Index.
I have created the table node (tbnode) and a child node for the DDBI (ddbinode), and set the singleton property of the DDBI node to false.
I have a local variable node with the same structure as the node above, and the values are available there.
I have one button, ADD. On clicking the Add button I need to populate the values into the table node as well as the DDBI node.
I created a supply function for the DDBI node and populate the values for the DDBI node there.
    Add Method:
IPrivateMdTest8CompView.ItbnodeElement tbnode = wdContext.nodetbnode().createtbnodeElement();
tbnode.setDescription(wdContext.currentCn_LocalVariableElement().getDescription());
wdContext.nodetbnode().addElement(tbnode);
    Supply Function Method:
for (int j = 0; j < wdContext.nodetbnode().size(); j++) {
    wdContext.nodeddbinode().setLeadSelection(j);
    IPrivateMdTest8CompView.IddbinodeElement ddbinode = wdContext.nodeddbinode().createddbinodeElement();
    ddbinode.setddvalue(wdContext.currentCn_localddvalueElement().getddvalue());
    wdContext.nodeddbinode().addElement(ddbinode);
}
The problem is: once I have the values in the drop down and I click the second row in the table, the supply function is called again and resets the first row's drop down to its original value.
If you have seen a problem like this, please provide the solution.
    Thanks & Regards,
    SatheshKumar R

If you created the supply method by setting the supply property of the node, you should have the variable 'node' available as an argument of the supply method; it relates to the table row of the triggered dropdown. Opening the dropdown does not change the lead selection of the parent node, so
    ddbinode = wdContext.nodeddbinode().createddbinodeElement();
    wdContext.nodeddbinode().addElement(ddbinode);
always relates to the first row of the table (given that leadSelection == 0). With the node variable you can write:
    IddbinodeElement ddbiElement = node.createddbinodeElement();
    node.addElement(ddbiElement);

  • Advanced Search in Reader 9 for packages w/ embedded indexes created in Acrobat 8.1

I am having a terrible day. I've just finished combining thousands of documents into 6 packages w/ embedded indexes using Acrobat 8.1. The advanced search feature allowed me to enter several people's names, "Wilkes Terry White Smith Johnson", and I could select "Match any of the terms" and see any documents that mentioned any of those names. The problem is that my boss only has Reader 9 installed, and his advanced search screen looks entirely different from the nice advanced search screen I can see. I can't figure out how to get all the advanced search features for him to use. If he enters several names, "Wilkes Terry White Smith", he gets no hits because the system assumes he is looking for someone named "Wilkes Terry White Smith." Is there any way to have the search look for "Match any of the terms" as opposed to "Match the exact phrase" using Reader 9? I'm in deep trouble tomorrow.

Is there any way to have the search look for "Match any of the terms" as opposed to "Match the exact phrase" using Reader 9?
    Yes; if the embedded index is still present and "Search" rather than "Find" is used.
    Edit > Search rather than Edit > Find.
    Search requires the embedded index or a Catalog index.
If a PDF having an embedded index undergoes a "Save As", the embedded index is removed and the PDF is saved to support Fast Web View.
    Be well...

  • Indices configuration for XML document analysis (indexing time problems)

    Hi all,
    I'm currently developing a tool for XML Document analysis using XQuery. We have a need to analyse the content of a large CMS dump, so I am adding all documents to a berkeley DB xml to be able to run xqueries against it.
In my last run I've been running into indexing speed problems, with single documents (typically 10-20 K in size) taking around 20 sec to be added to the database once it holds 6000 documents (I've got around 20000 in total). The speed of adding docs to the database drops as the number of documents grows.
    I suspect my index configuration to be the reason for this performance drop. Indeed, I've been very generous with indexes, as we have to analyse the data and don't know the structure in advance.
    Currently my index configuration includes:
- 2 default indices: edge-element-presence-none and edge-attribute-presence-none to be able to speed up every possible xquery to analyse data patterns: ex. collection()//table//p[contains(.,'help')]
    - 8 edge-attribute-substring-string indices on attributes we use often (id, value, name, ...)
    - 1 edge-element-substring-string index on the root element of the xml documents to be able to speed up document searches: ex. collection()//page[contains(.,'help')]
    So here my questions:
    - Are there any possible performance optimisations in Database config (not index config)? I only set the following:
    setTransactional(false);
    envConf.setCacheSize(1024*64);
    envConf.setCacheMax(1024*256);
- How can I test various index configurations on the fly? Are there any db tools that allow setting/removing indexes?
    - Is my index config suspect? ;-)
    Greetings,
    Nils

    Hi Nils,
    The edge-element-substring-string index on the document element is almost certainly the cause of the slow document inserts - that's really not a good idea. Substring indexes are used to optimize "=", contains(), starts-with() and ends-with() when they are applied to the named element that has the substring index, so I don't think that index will do what you want it to.
    John

  • How to get SQL script for generating table, constraint, indexes?

I'd like to get from somewhere an Oracle tool for generating a simple SQL script that creates tables, indexes, and constraints (like Toad does), and it has to be an Oracle tool, but not Designer.
Can someone give me some advice?
    Thanks!
    m.

I'd like to get from somewhere an Oracle tool for
generating a simple SQL script that creates tables,
indexes, and constraints (like Toad does), and it has to be
an Oracle tool, but not Designer.
    SQL Developer is similar to Toad and is an Oracle tool.
    http://www.oracle.com/technology/products/database/sql_developer/index.html
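If a plain SQL approach is enough, the DBMS_METADATA package can also generate the DDL directly from SQL*Plus (a sketch; SCOTT.EMP and the index name PK_EMP are placeholders, substitute your own objects):
SET LONG 100000
SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP', 'SCOTT') FROM dual;    -- table DDL
SELECT DBMS_METADATA.GET_DDL('INDEX', 'PK_EMP', 'SCOTT') FROM dual; -- index DDL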

  • Syntax for existing function-based index

    Hi:
    I am on 10.2.0.3.
Listed below is the list of indexes and index columns on one of the tables. Apparently one of the columns (SYS_NC00220$) is in reality a function-based index column.
    Anybody knows how to get SQL syntax for this index? TIA.
    INDEX_NAME UNIQUENES COLUMN_NAME COLUMN_POSITION
    PS0BI_HDR NONUNIQUE BILL_TO_CUST_ID 1
    PS0BI_HDR NONUNIQUE BUSINESS_UNIT 2
    PS0BI_HDR NONUNIQUE SYS_NC00220$ 3
    PS1BI_HDR NONUNIQUE BILL_STATUS 1
    PS1BI_HDR NONUNIQUE BUSINESS_UNIT 2
    PS1BI_HDR NONUNIQUE SYS_NC00220$ 3
    PS2BI_HDR NONUNIQUE CONTRACT_NUM 1
    PS2BI_HDR NONUNIQUE BUSINESS_UNIT 2
    PS2BI_HDR NONUNIQUE SYS_NC00220$ 3
    PSABI_HDR NONUNIQUE INVOICE 1
    PSABI_HDR NONUNIQUE BILL_TO_CUST_ID 2
    PSABI_HDR NONUNIQUE BUSINESS_UNIT 3
    PSABI_HDR NONUNIQUE BILL_STATUS 4
    PSBBI_HDR UNIQUE PROCESS_INSTANCE 1
    PSBBI_HDR UNIQUE BUSINESS_UNIT 2
    PSBBI_HDR UNIQUE INVOICE 3
    PS_BI_HDR UNIQUE BUSINESS_UNIT 1
    PS_BI_HDR UNIQUE SYS_NC00220$ 2

Query USER_IND_EXPRESSIONS and look at COLUMN_EXPRESSION.
This will give you the expression.
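For example (assuming the indexes above belong to a table named PS_BI_HDR; adjust the table name):
SELECT index_name, column_position, column_expression
FROM   user_ind_expressions
WHERE  table_name = 'PS_BI_HDR'
ORDER  BY index_name, column_position;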

  • Compression for oracle database and index compression during import of data

    Hi All,
I have a query: in order to import into an Oracle database and also have table compression and index compression, do we have some kind of load args for R3load, and do we also have to change the .tpl file?

    Hello guy,
I did this kind of compression within a migration project before.
I performed index compression first and then export -> import with table compression.
One thing you should take care of: delete the NOCOMPRESS flag from TARGET.SQL (created by program SMIGR_CREATE_DDL, which generates pure non-compressed objects for the tables it considers non-standard). For tables with more than 255 columns, we should not delete this flag.
Regarding index compression in the source system, please check the following notes:
    Note 1464156 - Support for index compression in BRSPACE 7.20
    Note 1109743 - Use of Index Key Compression for Oracle Databases
    Note 682926 - Composite SAP note: Problems with "create/rebuild index"
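For reference, the Oracle statements behind index and table compression have this shape (a sketch with hypothetical object names; BRSPACE and the DDL generated by SMIGR_CREATE_DDL issue the real ones):
ALTER INDEX "SAPSR3"."ZMYTAB~0" REBUILD COMPRESS 2;  -- compress the first 2 key columns
ALTER TABLE "SAPSR3"."ZMYTAB" MOVE COMPRESS;         -- compress the table during reorganization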
    Best Regards,
    Ning Tong

  • Steps for creating a database index

Do we just create it from SE11? Does Basis need to be involved for any further steps?

    Hi Amrutha,
Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE.
If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set. You specify the fields of secondary indexes in the ABAP Dictionary, where you can also determine whether the index is unique or not.
However, you should not create secondary indexes to cover all possible combinations of fields. Only create one if you select data by fields that are not contained in another index and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table. If a table has more than five indexes, you run the risk of the optimizer choosing the wrong one for a particular operation. For this reason, you should avoid indexes with overlapping contents.
Secondary indexes should contain columns that you use frequently in a selection and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column's selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to at most five percent of the table entries; if this is not the case, it is not worth creating the index. You should also avoid creating indexes for fields that are not always filled, where their value is initial for most entries in the table.
If all of the columns in the SELECT clause are contained in the index, the system does not have to search the actual table data after reading from the index. If you have a SELECT clause with very few columns, you can improve performance dramatically by including these columns in a secondary index.
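On the database, such a secondary index boils down to a plain CREATE INDEX statement of this shape (a sketch with hypothetical table and field names; SE11 generates the real one, conventionally named TABLE~ID):
CREATE INDEX "ZSALES~Z01" ON zsales (kunnr, matnr);  -- most selective field first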
    Index:
    http://help.sap.com/saphelp_nw04/helpdata/en/cf/21eb20446011d189700000e8322d00/content.htm
    Creating Secondary Index
    http://help.sap.com/saphelp_nw04/helpdata/en/cf/21eb47446011d189700000e8322d00/content.htm
    regards,
    keerthi.

  • List Views displaying error - when using Lookup column for filtering ( which are indexed )

I'm trying to filter a large list that has more items than the list view threshold. The list has a lookup column that is indexed. If I try to filter by other columns, the view works fine as long as it returns fewer than 5,000 items. However, if I try to filter by the lookup column, I get the error about not being able to display the items because the list is too large.
The views return fewer items than the threshold value, but I am still seeing the same issue. I created some pages and added these filtered views to them; the pages show the error below:
This view cannot be displayed because it exceeds the list view threshold ....
If I navigate to the view in the list, it returns zero records without any error page. But no records.
Is there a known issue with filtering large lists by lookup column? Any other troubleshooting suggestions? Everything I'm doing here is through the web interface.

Thank you for your reply. Yes, it's the default limit. But the views I created based on a filter condition on the lookup column are not working.
    For example:
    Created a View based on Lookup column DEPT using filter condition Dept = HR , returned 1400 items
    Created another View based on Lookup column DEPT using filter condition Dept=Admin , returned 3600 items
Adding these list view web parts to the page also worked fine, displaying the results.
After adding one item to this list, both views stopped displaying any results. Now the count is 5001 (it exceeds the list view threshold).
The page where the list view web parts are added throws the error below:
    This view cannot be displayed because it exceeds the list view threshold (5000 items) enforced by the administrator.
    To view items, try selecting another view or creating a new view. If you do not have sufficient permissions to create views for this list, ask your administrator to modify
    the view so that it conforms to the list view threshold.
    Learn about
    creating views for large lists.
I am experiencing this problem only with lookup columns; choice and site columns work fine in the same situation.
    Thanks,

  • Search a waveform for the time or index.

I would like to search a waveform for a particular percentage y value on the waveform and return either the index, from which I can generate the time by multiplying dt by the index, or the time value directly. I tried the Search Waveform.vi, but this appears to affect the display of the original waveform. The closest I can get is with the Threshold 1D Array.vi.
All the other utilities produce an index of zero or affect the display of the original waveform.
    The subvi is attached.
    Attachments:
    Percentage Change Cursor.vi ‏52 KB

    New version of subvi with default values for NI Application Engineer.
    Attachments:
    Percentage Change Cursor.vi ‏446 KB
