Columns that have an index

I need to know which columns like "ID%" that are NOT NULL in my schema "TESTE" don't have an index. I used something like:
select owner, table_name, index_name from dba_indexes where owner='TESTE' and table_name in (
select table_name from ALL_TAB_COLUMNS where owner='TESTE' and COLUMN_NAME like 'ID%' and NULLABLE like 'N')
But this gives the columns that do have one; I need the opposite.
Tks,
Elber.

Although Sundar's solution may work in your particular case, it may be inaccurate if more than one table could have a column with the same name. Fortunately, the fix for that also gives you the table name "for free":
SELECT table_name, column_name
FROM all_tab_columns
WHERE owner='TESTE' and
      column_name LIKE 'ID%' and
      nullable LIKE 'N'
MINUS
SELECT table_name, column_name
FROM all_ind_columns
WHERE index_owner='TESTE' and
      column_name LIKE 'ID%'
You could also just add table_name to the select list of Kedruwsky's solution.
One thing you may want to keep in mind is that Sundar's and Kedruwsky's solutions, and my modification of Sundar's, only check to see whether the column is present in an index at all. It could be the fifth column in a seven-column index.
Depending on what exactly you are looking for, you may want to consider the column_position field in all_ind_columns. If you modify the query against all_ind_columns to include a predicate like:
column_position = 1
it will eliminate only those columns that are on the leading edge of an index (a full version of that variant is sketched at the end of this reply). Based on the ID% name, I suspect that you might be looking for primary keys. If that is true, you might want to replace the query against all_ind_columns with a query against a join of all_constraints and all_cons_columns. Something like:
SELECT table_name, column_name
FROM all_tab_columns
WHERE owner='TESTE' and
      column_name LIKE 'ID%' and
      nullable LIKE 'N'
MINUS
SELECT c.table_name, cc.column_name
FROM all_constraints c, all_cons_columns cc
WHERE c.constraint_name = cc.constraint_name and
      c.owner = cc.owner and
      c.constraint_type = 'P' and
      c.owner='TESTE' and
      cc.column_name LIKE 'ID%' and
      cc.position = 1
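For the column_position variant mentioned above, the full query would look something like this (an untested sketch, same owner and naming assumptions as the queries above):
SELECT table_name, column_name
FROM all_tab_columns
WHERE owner='TESTE' and
      column_name LIKE 'ID%' and
      nullable LIKE 'N'
MINUS
SELECT table_name, column_name
FROM all_ind_columns
WHERE index_owner='TESTE' and
      column_name LIKE 'ID%' and
      column_position = 1
HTH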
John

Similar Messages

  • Collect only columns that have values

    I have one Collection called Collection1 with the following columns:
    First
    Last
    Company
    I'd like to collect that information into another collection, but only the columns that have data in them. I can do it using multiple If statements for each column (If(IsEmpty())), but this can create a LOT of If statements if I'm using a lot of columns, rather than just the 3 in the example.
    Is there a way to evaluate each column, and only collect it if it isn't empty, without having to type out each column name in an If statement?
    -Bruton

    Hello Oliver,
    Thanks for your help.  In that example, I would like:
    First          Last           Company
    John                            Contoso
                     Smith         Contoso2
    Brad          Anderson
    What I'm looking for is how to do it if I have:
    First          Last          Company
    John          Smith         <empty>
    Bruton       Gaster        <empty>
    Barry         Manalo        <empty>
    I want to collect First and Last, but if I have:
    First          Last           Company
    Scott          <empty>   Acumen
    Bill             <empty>   Microsoft
    Fred           <empty>   Dell
    I want to collect First and Company. The column names may be based on variables, so I'd rather not have to throw a bunch of If statements at it, but if I have to, I have to.
    Thanks!
    -Bruton

  • My observations ( for people that have indexing and connection issues )

    Since Cisco takes an awful lot of time doing anything about these problems, I decided to investigate the box myself today.
    I followed advice from various threads and did some digging around with network traffic snoopers and some other tools.
    First ( and this is buried somewhere deep in the documentation ): you can NOT have ANY file or folder name that is longer than 32 characters. This is a limitation of the stupid Linux they are running on! If you have a file or folder with a longer name, it may screw up the indexing.
    Second: file and path names can only have alphanumeric characters and spaces in them. DO NOT USE ANY other character; it messes up the indexing. I had folders named -= folder 1 =-. They never got indexed. As soon as I removed the -= and =-, the indexing kicked in .... This is again a TwonkyVision / loonix problem.
    The box has tremendous problems with ghost devices on the network. I have on my network at home: 4 PCs running XP ( some Pro, some Home edition ), 2 laptops with Vista, one box with Win7, one Windows Home Server, one Linksys Skype phone, one Roku Soundbridge, one D-Link DNS-323, one ADS network drive, one Simpletech Simplestor, an HP Color LaserJet, an HP 6250, a 7650 and some other HP network printer. Through the wireless link my PDA ( iPaq ) and iPhone connect once in a while too. My Blu-ray player is also hooked up ( Netflix streaming ). And then there are various experimental systems ( I am an electronics engineer and I have built some gadgets that are network connected that allow me to remotely turn on lights, monitor temperature, etc. ).
    Now, the NMH does not correctly detect most of the devices. It keeps on trying to feed information to the printers ... it also tries feeding information to other NAS devices as well as to the Windows Home Server ... It falsely identifies one of the printers as a D-Link Ethernet-connected DVD player ...
    It also has problems with devices that use static IP addresses on the network ( I set up the printers and other NAS devices with hardcoded IP addresses, so no DHCP ).
    So here is what I did: go to the Twonky configuration ( port 9000, see Tony's article ).
    Step 1: Yank out the Ethernet cable to ANY OTHER DEVICE except the PC you are working on, your router and the NMH <- this is important.
    Step 2: Hit the button to erase all the devices it discovered ( reset list ).
    Step 3: Hit the SAVE button.
    Step 4: UNCHECK the box next to the 'reset list' button.
    Step 5: Hit the SAVE button.
    Now, on the left-hand side, click on Maintenance.
    Click on ALL checkboxes under Log level. They should ALL be checked.
    Hit the Clear Logs button.
    Hit Save changes.
    Hit RESTART server. You will get a file-not-found error page. You will see in your browser's title bar that the URL changes to an IP address with some text behind it. Remove all that text, key in :9000 and hit return. It will take you back to Twonky. ( I don't know how important this step is, but I did not go in through the device name; I used the IP address from this point on. Normally it should not matter, but you never know. I have a suspicion I will explain later. )
    Write down this IP address. It is useful to know.
    Now, since we are back in WonkyVision ( stupid half-baked program ), go back to the Maintenance screen.
    Now hit the rebuild database button.
    You should hear disk activity now.
    Hit the Show log button once. The log files should open.
    You can refresh this screen by hitting the reload button in your browser.
    You should see messages fly by like:
    21:14:40:317 LOG_SYSTEM:fsmon_add_watch inotify_add_watch (12, /share/media/photos/Moorea May 1998) returns 2045
    21:14:40:317 LOG_DB:watch wd=2045 added on /share/media/photos/Moorea May 1998
    21:14:40:379 LOG_DB:upnp_folder_watch_add_dir /share/media/photos/Moorea May 1998/New Folder
    This means it is probing the entire directory structure and adding files and paths.
    Let it run for a while, hitting refresh on your browser once in a while. It took a couple of hours on mine ( 76,000+ pictures ... ) plus a couple of hundred songs + some videos.
    Once disk activity ceases: go back to Twonky port 9000, go into Maintenance and hit the Clear Logs button.
    Hit the Save button.
    Hit restart server. You will again get an error page. Get rid of the rubbish behind the IP address and key in :9000.
    Go to the clients and security page.
    Make sure Automatic discovery is still turned OFF ( if it is on, you will see in the logbooks that it attempts several times a second to connect to anything it can find. Since the detection process is flawed, it bombs out. This may overload the poor CPU in the NMH... ).
    Hit the reset list once more.
    Hit save.
    Now turn automatic discovery on and hit Save.
    Go back to Maintenance and hit restart server. Again, on the error page: erase the garbage after the IP address and go back to port 9000.
    If you now go back to the client/sharing page you should see 2, possibly 3, devices: one is your router, one is the PC you are working on, and the last one is the same IP address as you see in the browser ( the IP address you are using to talk to the NMH ).
    Make sure all the checkboxes before these devices are checked, and hit the save button once more.
    At this point I unchecked automatic discovery and hit save once more, went back to Maintenance and hit restart server for the last time.
    At this point the NMH restarted Twonky and immediately there was a ton of disk activity. I opened the normal NMH user interface and lo and behold: the green spinning arrow started to move and progress was going forward. It increased 1% roughly every 10 seconds or so. When it finally hit 100%, everything was there, as it should be.
    Now, speculation on my part:
    - this thing has trouble with long file names and non-alphanumeric characters.
    - this thing has trouble with devices it incorrectly identifies or cannot identify. This screws up WonkyMedia.
    - the communication between the process on the NMH ( the one that serves the Flash user interface running in your browser ) and WinkyMedia goes through a network port itself. There are problems. They do not do inter-process communication but go via network messages ... ( this is kind of dumb, as it loads the network... )
    Proof:
    21:01:03:625 filescanner thread started
    21:01:03:628 LOG_SSDPSDP_notify_packet ### SSDP sending:
    NOTIFY * HTTP/1.1
    HOST: 239.255.255.250:1900
    CACHE-CONTROL: max-age=99999
    LOCATION: http://192.168.1.66:9000/DeviceDescription.xml
    NT: urn:schemas-upnp-org:service:ContentDirectory:1
    NTS: ssdp:alive
    SERVER: Linux/2.x.x, UPnP/1.0, pvConnect UPnP SDK/1.0
    The filescanner sends messages through network port 1900 to the NMH. Since the filescanner is running on the NMH ... I also have no clue what 239.255.255.250 at 1900 is ...
    I also sometimes see the following messages fly by:
    21:01:03:894 [Error] - LOG_HTTP:HTTP_get_header Cannot receive header, clientSocket=13, nBytesReceived=0
    21:01:03:894 LOG_HTTP:HTTP_send_receive received no header in HTTP_send_receive, propably client closed socket, URL=http://192.168.1.66:9000/
    My suspicion is that, since the various processes running on the NMH ( the indexer, the UI server and all the Twonky processes ) all intercommunicate through network ports, this is a problem. If the network settings get corrupted ( because of false identification, network overload or whatever ), the thing jams up.
    By cleaning out all the false identities, letting it identify itself ( important for its own process communication ) and a PC, and then turning off the detection, this solves that problem.
    Limiting the filesystem to 'clean, 32-char max' names solves another problem.
    I also see a lot of keep-alive messages fly by on the network ( several a second ).
    I eventually plugged in my other computers and the Roku, let it autodetect for a while and turned this feature back off. Then I plugged in all the other devices the NMH has no business with.
    So far it still works fine.
    I still do see a ton of messages fly by where the NMH is probing the network for existing devices. They come back with 'device already validated':
    21:01:04:140 LOG_CLIENT_DB:The entry found in known clients list is already validated (ip=192.168.1.76)
    21:01:04:140 LOG_CLIENT_DB:checking http header for entry ip=192.168.1.76, mac=
    21:01:04:140 LOG_CLIENT_DB:Checking http header to find a matching client.db entry (ip=192.168.1.76)
    21:01:04:141 LOG_CLIENT_DB:Ignoring client with fixed flag = TRUE (ip=192.168.1.76)
    ( .76 is the PC I am working on right now. )
    I don't know why they keep probing. I am not streaming anything and auto-detect is turned off ..
    Anyway, I will keep you guys posted on how this evolves.
    One thing is for sure: this is another half-baked 'broken source' based system.

    Further observations:
    22:24:31:868 LOG_CLIENT_DB:checking http header for entry ip=192.168.1.66, mac=
    22:24:31:868 LOG_CLIENT_DB:Checking http header to find a matching client.db entry (ip=192.168.1.66)
    22:24:31:869 LOG_CLIENT_DB:HHetting client adaptation to 49 (ip=192.168.1.66)
    22:24:36:878 LOG_CLIENT_DB:The entry found in known clients list is already validated (ip=192.168.1.66)
    22:24:36:878 LOG_CLIENT_DB:checking http header for entry ip=192.168.1.66, mac=
    22:24:36:878 LOG_CLIENT_DB:Checking http header to find a matching client.db entry (ip=192.168.1.66)
    22:24:36:879 LOG_CLIENT_DB:HHetting client adaptation to 49 (ip=192.168.1.66)
    22:24:37:926 LOG_CLIENT_DB:The entry found in known clients list is already validated (ip=192.168.1.76)
    22:24:37:926 LOG_CLIENT_DB:checking http header for entry ip=192.168.1.76, mac=
    22:24:37:926 LOG_CLIENT_DB:Checking http header to find a matching client.db entry (ip=192.168.1.76)
    22:24:37:927 LOG_CLIENT_DB:Ignoring client with fixed flag = TRUE (ip=192.168.1.76)
    22:2
    This thing keeps on probing itself ... I wonder why. ( .66 is the NMH, .76 is my PC ... ) It is also strange that it cannot retrieve its own MAC address ....
    Oh, you can turn off the logging features again when done; it only takes time and disk space on the NMH.
    And before I get flamed about the comments I make about 'broken source': I have no gripe with Linux. I have a problem with companies that grab a bunch of stuff that is free, slap it together, sell it for a lot of money and give no support to the people that bought it. They want all the money for no effort .. they turn open source into broken source ...

  • Counting the Number of Cells in a Table that Have the Same Value

    Is there a way/formula to do this, other than the way I'm doing it now, which is by using COUNTIF? In a table with 1000 rows, for instance, let's say the values in column A are what I'm keying off. I would like a count of all the cells in column A that have the word "example" as their value. Right now, for each unique value in column A I am manually creating a COUNTIF(A1:A1000,"<value I'm matching against>"), but if there are many unique values in column A, it's quite laborious. Ideally I'd like to just have a table generated that gives me the top 5 or top 10 most frequently occurring cell values in column A of the table. What's the best way to do this?

    So for instance, for a column like this:
    1
    1
    1
    2
    2
    4
    5
    I want a way to get back the number of times 1 appears in the list (3), the number of times 2 appears in the list (2), 4 (1), 5 (1), and so on. If I do a COUNTIF and there are thousands of rows, I have to manually put the matching string in each one.

  • Selecting columns which have data in it

    I have tables with more than 300 columns and most of the columns do not contain any data. Could anyone suggest a SQL statement to select only those columns which have data for at least one of the rows?
    Thanks,
    Sachin

    The following script combines some of the ideas from the responses above, so that all you should need to do is run this one script to select the data from only those columns that have data in at least one row, skipping the columns that don't have data in any of the rows.
    The script will prompt you for the name of the table for which you want to select the columns. Then it will analyze that table, which will update the num_nulls column of the all_tab_columns view for your table. Then it will use that information to select the proper columns, which will be spooled to another file called list_not_nulls2.sql.
    The file list_not_nulls2.sql that this script creates will contain the select statement that will include only the not null columns that you are looking for. The script will then automatically run that script containing the select statement for you.
    Please let me know if you have any difficulties using or understanding it.
    Barbara
    SET ECHO OFF
    SET FEEDBACK OFF
    SET HEADING OFF
    SET PAGES 0
    SET TIMING OFF
    SET VERIFY OFF
    ACCEPT name_of_table PROMPT 'Enter name of table: '
    SET TERMOUT OFF
    ANALYZE TABLE &name_of_table COMPUTE STATISTICS;
    SPOOL list_not_nulls2.sql
    SELECT 'SELECT '
    FROM DUAL;
    SELECT COLUMN_NAME || ','
    FROM ALL_TAB_COLUMNS
    WHERE TABLE_NAME = UPPER ('&name_of_table')
    AND NUM_NULLS <> (SELECT COUNT (1) FROM &name_of_table)
    AND COLUMN_ID < (SELECT MAX (COLUMN_ID)
    FROM ALL_TAB_COLUMNS
    WHERE TABLE_NAME = UPPER ('&name_of_table')
    AND NUM_NULLS <> (SELECT COUNT (1) FROM &name_of_table));
    SELECT COLUMN_NAME
    FROM ALL_TAB_COLUMNS
    WHERE TABLE_NAME = UPPER ('&name_of_table')
    AND NUM_NULLS <> (SELECT COUNT (1) FROM &name_of_table)
    AND COLUMN_ID = (SELECT MAX (COLUMN_ID)
    FROM ALL_TAB_COLUMNS
    WHERE TABLE_NAME = UPPER ('&name_of_table')
    AND NUM_NULLS <> (SELECT COUNT (1) FROM &name_of_table));
    SELECT 'FROM ' || '&name_of_table' || ';' FROM DUAL;
    SPOOL OFF
    SET ECHO ON
    SET FEEDBACK ON
    SET TERMOUT ON
    SET VERIFY ON
    START list_not_nulls2
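    One note: ANALYZE ... COMPUTE STATISTICS is deprecated for gathering optimizer statistics on later releases, so if the ANALYZE line gives you trouble, the same num_nulls figures can be gathered with DBMS_STATS instead, something like:
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, UPPER('&name_of_table'))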

  • NPS - sql log - can I remove columns that are always NULL?

    I am trying to clean up this logging stuff a bit, and NPS logs can get quite large, so I was wondering if you can remove unneeded columns that have useless data (for me..) or simply NULL values without messing up the logging process?
    Thanks,
    Dan

    Hi,
    I have not found any document saying you can safely delete the NULL columns. By default, NPS does not log any data until you configure it to do so. Initially, it is recommended that you enable the logging of accounting and user authentication requests. You can refine your logging settings after you determine your required data.
    To limit the size of each log file, click "When log file reaches this size", and then type a file size, after which a new log is created. The default size is 10 megabytes (MB).
    The related KB:
    The Cable Guy: The New and Improved Network Policy Server
    http://technet.microsoft.com/en-us/magazine/ff943567.aspx
    Configure NPS Log File Properties
    http://technet.microsoft.com/en-us/library/ee663944(v=ws.10).aspx
    Hope this helps.

  • Is there a routine one can use to shift the column of data by one each time the loop index increments? In other words, increment the columns that the data is being saved by using the index?

    The device, an Ocean Optics spectrometer, outputs data in columns of about 9000 cells. I'm saving this as an lvm file using the "write to measurement file.vi", but it doesn't give me the flexibility I need, as far as I can tell.
    I need to move the column by the index of the for loop, so that when i = n, the data will take up the (n+1)th column ( the 1st column is used for wavelength ). How do I use the "write to spreadsheet file.vi" to do this? Also, if I use the "write to spreadsheet file.vi", is there a way to increment the file name so that the data isn't written over? I like what "write to measurement file.vi" does.
    I'd really appreciate any help someone can give me. I'm a novice at this, so the greater the detail, the better. Thanks!!!

    You cannot write one column at a time to a spreadsheet file, because a file is arranged linearly, and adding a column would need to move (= read and rewrite elsewhere) almost all existing elements to interlace the new data. You can only append new rows without having to touch the already-written data.
    Fields typically don't have fixed width. An exception would be binary files that are pre-allocated at the final size. In this case you can write columns by setting the file position for each element. It will still be very inefficient.
    What you could do is append rows until all data is written, then read, transpose, and write back the final file.
    What you could also do is build the final array in a shift register and write the entire thing to file at once after all data is present.
    LabVIEW Champion. Do more with less code and in less time.

  • Query on virtual column that is defined in XMLIndex does not use the index

    Hello,
    I am facing an issue in executing queries on a virtual column that is defined in an XMLIndex: it appears as if the index is not used.
    Database details:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for 64-bit Windows: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    For this use case the XML documents adhere to the following XSD and are stored in an XMLType column in a table:
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
        xmlns="http://a_name_space/v1"
        targetNamespace="http://a_name_space/v1"
        elementFormDefault="qualified" attributeFormDefault="unqualified" version="1.0">
        <xsd:element name="fields">
            <xsd:complexType>
                <xsd:sequence>
                    <xsd:element name="field" maxOccurs="unbounded">
                        <xsd:complexType>
                            <xsd:choice>
                                <xsd:element name="value" minOccurs="1" maxOccurs="1">
                                    <xsd:complexType>
                                        <xsd:simpleContent>
                                            <xsd:extension base="notEmptyString4000Type"/>
                                        </xsd:simpleContent>
                                    </xsd:complexType>
                                </xsd:element>
                                <xsd:element name="values" minOccurs="1" maxOccurs="1">
                                    <xsd:complexType>
                                        <xsd:sequence>
                                            <xsd:element name="value" minOccurs="1" maxOccurs="1">
                                                <xsd:complexType>
                                                    <xsd:simpleContent>
                                                        <xsd:extension base="notEmptyString4000Type">
                                                            <xsd:attribute name="startDate" type="xsd:date" use="required"/>
                                                            <xsd:attribute name="endDate" type="xsd:date" />
                                                        </xsd:extension>
                                                    </xsd:simpleContent>
                                                </xsd:complexType>
                                            </xsd:element>
                                        </xsd:sequence>
                                    </xsd:complexType>
                                </xsd:element>
                            </xsd:choice>
                            <xsd:attribute name="name" type="string30Type" use="required"/>
                            <xsd:attribute name="type" type="dataType" use="required"/>
                        </xsd:complexType>
                    </xsd:element>
                </xsd:sequence>
            </xsd:complexType>
        </xsd:element>
        <xsd:simpleType name="dataType">
            <xsd:annotation>
                <xsd:documentation>Char, Date, Number</xsd:documentation>
            </xsd:annotation>
            <xsd:restriction base="xsd:string">
                <xsd:enumeration value="C"/>
                <xsd:enumeration value="D"/>
                <xsd:enumeration value="N"/>
            </xsd:restriction>
        </xsd:simpleType>
        <xsd:simpleType name="string30Type">
            <xsd:restriction base="xsd:string">
                <xsd:maxLength value="30"/>
            </xsd:restriction>
        </xsd:simpleType>
        <xsd:simpleType name="notEmptyString4000Type">
            <xsd:restriction base="xsd:string">
                <xsd:maxLength value="4000"/>
                <xsd:pattern value=".+"/>
            </xsd:restriction>
        </xsd:simpleType>
    </xsd:schema>
    A field can have a single value as well as multiple values.
    The XMLIndex is defined as follows:
    CREATE INDEX test_xmltype_idx ON test_xmltype (additional_fields) INDEXTYPE IS XDB.XMLIndex
    PARAMETERS ('
    XMLTable dt_fld_tab (TABLESPACE "TAB_SPACE" COMPRESS FOR OLTP) ''fields/field''
    COLUMNS
    name varchar2(30 char) PATH ''@name''
    ,dataType varchar2(1 char) PATH ''@type''
    ,val varchar2(4000 char) PATH ''value/text()''
    ,vals XMLType PATH ''values/value'' VIRTUAL
    XMLTable dt_fld_multi_value_tab (TABLESPACE "TAB_SPACE" COMPRESS FOR OLTP) ''value'' passing vals
    COLUMNS
    val varchar2(4000) PATH ''text()''
    ,startDate varchar2(30 char) PATH ''@startDate''
    ,endDate varchar2(30 char) PATH ''@endDate''
    ');
    The following B-tree indexes are defined:
    create index dt_field_name_idx on dt_fld_tab (name);
    create index dt_field_value_idx on dt_fld_tab (val);
    create index dt_field_values_idx on dt_fld_multi_value_tab (val);
    And stats are properly computed before the queries are executed:
    call dbms_stats.gather_table_stats(user, 'test_xmltype', estimate_percent => null);
    Queries for single values are cost-efficient and fast: with 600K rows in the table these return in 0.002 seconds.
    Queries for multi-valued fields / elements are not, though; these result in a full table scan.
    Sample XML snippet:
    <fields>
      <field name="multiVal" type="C">
        <values>
          <value startDate="2013-01-01" endDate="2013-01-01">100</value>
          <value startDate="2014-01-01">120</value>
        </values>
      </field>
    </fields>
    Examples of costly and slow queries:
    select id from test_xmltype
    where xmlexists('/fields/field/@name="multiVal"' passing additional_fields)
    and xmlexists('/fields/field/values/value[@startDate="2013-01-01"]' passing additional_fields)
    and xmlexists('/fields/field/values/value[text()="100"]' passing additional_fields);
    select id from test_xmltype
    where xmlexists('/fields/field/@name="multiVal"' passing additional_fields)
    and xmlexists('/fields/field/values/value[@startDate="2013-01-01" and .="100"]' passing additional_fields);
    Whereas the following query on the multi-valued field is fast:
    select id from test_xmltype
    where xmlexists('/fields/field/@name="multiVal"' passing additional_fields)
    and xmlexists('/fields/field/values/value[@startDate="2013-01-01"]' passing additional_fields);
    For the XPath /fields/field/values/value[@startDate="2013-01-01"] the index is used.
    Suspected cause: an XPath issue for the value of a multi-valued field, e.g. /fields/field/values/value[text()="aValue"].
    Any hints are appreciated: what am I overlooking here?
    Thanks in advance,
    -Sjoerd

    Hello,
    This is using binary XML. The table creation script is:
    create table test_xmltype
    (id number(14,0) not null primary key
    ,member_code varchar2(30 char) not null
    ,period_code varchar2(30 char) not null
    ,amount number(12,2) not null
    ,additional_fields xmltype
    );
    The schema is not registered in the database. Is that required? It is primarily used to generate Java classes that will be used in order to construct the XML documents.
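    For reference, the storage model of the XMLType column can be confirmed from user_xml_tab_cols; a quick query along these lines should report BINARY here:
    select column_name, storage_type
    from user_xml_tab_cols
    where table_name = 'TEST_XMLTYPE';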
    And you are right: for our initial investigation the sample XML documents are generated with a PL/SQL routine and do not contain namespaces. But the single-valued fields also have no namespaces, and the queries on these are executed with very satisfactory plans.
    Thanks for the swift reply.
    -Sjoerd

  • Can I create a view based on two tables that have the same column name?

    I have two tables A and B. Each table has 50+ columns.
    I want to create a view that includes all the columns in A and all the columns in B. I created a view with a select statement that says
    Select A.*, B.*
    From A, B
    where A.id = B.id
    It returns an error because in each table I have a column called Modified_By that keeps track of whether a record has been changed. That's what it chokes up on, I figure. I would like to write the view without explicitly writing each column name from A and B as part of the select statement. The actual select statement works fine and only bombs when I try to turn it into a view.

    You will have to type the full column list at least once. You can save a few keystrokes (i.e. typing alias. in front of every column) by providing the column names to the CREATE part instead of the SELECT part. Something like:
    SQL> desc t
    Name                                      Null?    Type
    ID                                                 NUMBER
    NAME                                               VARCHAR2(10)
    SQL> desc t1
    Name                                      Null?    Type
    T_ID                                               NUMBER
    LOC_ID                                             NUMBER
    NAME                                               VARCHAR2(15)
    SQL> CREATE VIEW t_v (id, t_name, t_id, loc_id, t1_name) AS
      2  SELECT t.*, t1.*
      3  FROM t, t1
      4  WHERE t.id = t1.t_id;
    View created.
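    If you don't want to type the list by hand even once, you could generate it from the data dictionary and paste the result into the CREATE VIEW; a rough sketch (renaming the duplicate NAME columns is still up to you):
    SELECT column_name || ','
    FROM all_tab_columns
    WHERE owner = USER
    AND table_name IN ('T', 'T1')
    ORDER BY table_name, column_id;
    HTH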
    John

  • How to find out which table columns require an index

    Hi all,
    I want to know which columns require an index in one schema.
    What are the ways to achieve this?

    To know what columns to index you must, not should, but must, know your data, know how it will be used, and know how your WHERE clause filters will affect how the data is accessed.
    Building indexes based on some rule is a waste of CPU, disk i/o, and space.
    To build indexes that enhance rather than degrade a system requires research and the use of explain plan reports generated with DBMS_XPLAN.
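    For example, a minimal way to check whether a given query would actually use an index (the table, column and value here are placeholders):
    EXPLAIN PLAN FOR
    SELECT * FROM some_table WHERE some_column = 42;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);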
    http://www.morganslibrary.org/library.html

  • I created a Pages document inserting 2 columns using 1) Inspector 2) Layout 3) Columns. How do I decrease the height of the column? I have tried to use the cursor and drag down the top border, but that does not reset the top border.

    I created a Pages document, inserting 2 columns using 1) Inspector 2) Layout 3) Columns. How do I decrease the height of the columns? I have tried to use the cursor and drag down the top border, but that does not reset/decrease the top border.

    Set your columns back to one for the moment. In layout mode, insert a Text box. Place it in the upper left corner of your document, and drag down and right to the size of the container for your two columns. Click inside the Text Box, and now bump up your columns to 2. Your two columns are now contained in this resizable Text Box.

  • JOIN 2 tables that have same column ?

    I need to learn how to join two tables that both have the same column name:
    tbl1 - idskey
    tbl2 - idskey
    the idskey column holds a id_number
    When I do the JOIN I would like to make sure that only Distinct records are joined from both tables and that any duplicates are removed in the final join. So if:
    Tbl1 has a idskey of: 12345
    and
    Tbl2 has a idskey of: 12345
    In the final JOIN I want to remove one of those duplicates.
    I actually need to join 3 tables that have the same linking column names for the join, but if I learn how to do this correctly on 2, that will be a start.
    10g for db, thanks!

    Hi,
    SELECT DISTINCT and GROUP BY are the most common ways to get unique results from non-unique keys. Exactly how you use them depends on exactly what you want to do.
    SELECT DISTINCT guarantees that no two rows in the result set, considering all columns, will be identical.
    GROUP BY produces one row from a set of rows that have a common feature. The values on that row may be a composite of values from various rows in that set (e.g., an average).
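    To illustrate with the table and column names you posted, a join that returns each matching idskey only once could be as simple as this (a sketch; extend the same pattern for your third table):
    SELECT DISTINCT t1.idskey
    FROM tbl1 t1, tbl2 t2
    WHERE t1.idskey = t2.idskey;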
    Please post a small, specific example. For instance:
    "I have two rows in tbl1 ...
    and these fhtee rows in tbl2 ...
    Notice how there is one row with idskey=12345 in tbl1 but two such rows in tbl2.
    How can I get theses results ...
    where only one row has idskey=12345?"

  • HT2905 Most all of my 1700 songs have been duplicated in iTunes. I have downloaded the instructions how to delete the duplicates but it says "sort by the date you added" and i have no column that says that. I am running windows xp.

    Most all of my 1700 songs have been duplicated in iTunes. I have downloaded the instructions how to delete the duplicates but it says "sort by the date you added" and i have no column that says that. I am running windows xp.

    Apple's official advice is here... HT2905 - How to find and remove duplicate items in your iTunes library. It is a manual process and the article fails to explain some of the potential pitfalls.
    Use Shift > View > Show Exact Duplicate Items to display duplicates, as this is normally a more useful selection. You need to manually select all but one of each group to remove. Sorting the list by Date Added may make it easier to select the appropriate tracks; however, this works best when performed immediately after the dupes have been created. If you have multiple entries in iTunes connected to the same file on the hard drive, then don't send to the recycle bin.
    Use my DeDuper script if you're not sure, don't want to do it by hand, or want to preserve ratings, play counts and playlist membership. See this thread for background and please take note of the warning to backup your library before deduping.
    (If you don't see the menu bar press ALT to show it temporarily or CTRL+B to keep it displayed)
    tt2

  • Tally up a column only on rows that have been checked

    Hi, I'm driving myself to drink with this, and I'm sure it should be pretty easy.
    I have a subtotal column (Column D) whose total I calculate via SUM(D2:D20). However, I also have a column G, a "Paid" column, that has a checkbox. What I want to do is only tally up in column D the amounts that have been ticked/checked in column G.
    I thought SUMIF would be my solution but it's not working.
    I've tried the following among other things and nothing works.
    =SUMIF(D2:D20,TRUE)
    =SUMIF(G2:G20,TRUE,D2:D20,)
    Any help would be most appreciated.

    =SUMIF(G2:G20,TRUE,D2:D20)
    works perfectly.
    In your message there was an extraneous comma, but I don't know if it was actually in your sheet or just a typo here.
    If you are running Numbers in English in a country whose decimal separator is the comma, you will have to edit the formula like this:
    =SUMIF(G2:G20;TRUE;D2:D20) (semi-colon replacing comma).
    Yvan KOENIG (from FRANCE samedi 19 juillet 2008 20:47:27)

  • How to verify the columns that make up an index

    hello,
    The SQL statement:
    select index_name, index_type, table_name, table_owner from dba_indexes where table_name = '&table_name';
    gives you the indexes built on a table.
    But how can I find out ( with a SQL statement ) the columns that make up an index?
    thanks

    The columns which make up an index can be queried from DBA_IND_COLUMNS.
    For example:
    select column_name from dba_ind_columns
    where index_name = '<your-index-name>'
    order by column_position;
    The Oracle Data Dictionary is HUGE. When I'm looking for something and can't immediately recall the exact name of the view I need, I query DICT and get Oracle to give me a list of the likely views I need:
    For example:
    select table_name from dict where table_name like '%COLUMN%';
    All the best,
    SF.

Maybe you are looking for

  • Voice memo volume

    I just got an iPhone 5S (had a 4S).  I use Voice Memos alot and am having trouble hearing my voice on the 5S, it doesn't playback very loudly.  There's nothing I wrong with my hearing, I have the volume turned up all the way using the volune Up butto

  • Duda sobre SSI.

    Hola. En mi hosting dispongo de SSI (Server Side Include) y funciona perfectamente. Pero, como hago para que mi primera p�gina sea .shtml? Normalmente tiene que ser index.htm pero si pongo index.shtml no visualiza nada al poner el dominio en el naveg

  • Profile Parameter Setup (RZ10) - Help Needed

    In using RZ10 to setup profile parameter for QAS, in the scenario below: How dow I change the "Unsubtituted and Subituted standard value to match this miadevs2\sapmnt\trans                                                                              

  • Append a txt or doc file to a Teststand Report

    How can I append an external doc or txt file ( that already exsists somewhere on my hard drive) to the report generated by Teststand 3.1 at the end of a "Test UUTs" seq run ?

  • Printing off-line

    My Mac is not connected to the internet. Is there any way to save the pdf and send it to Mac without aperture? Does anyone know any other companies that can use the pdf from aperture to print the book?