Timestamp question

I'm loading a CSV file into a table.
The first column is a decimal, the second is a TIMESTAMP(6), and the rest are numeric columns.
my csv:
1.0, 2007-12-12 12:23:45,0,0,0,0,0,0,0,0,0,0
I'm getting an error when it tries to load the timestamp:
ORA-01843: not a valid month

I'm using the Oracle Load Data tool.
I'm using the option to automatically generate the control file.
It doesn't say what the control file is called or where it's located.
Here's the output log:
SQL*Loader: Release 10.2.0.1.0 - Production on Fri Dec 14 13:42:45 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Control File: C:\ORADATA\TEST\test.CTL
Data File: C:\ORADATA\TEST\test.csv
Bad File: C:\ORADATA\TEST\test.bad
Discard File: none specified
(Allow all discards)
Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 256000 bytes
Continuation: none specified
Path used: Conventional
Table table.LOCATION, loaded from every logical record.
Insert option in effect for this table: APPEND
Column Name Position Len Term Encl Datatype
ID FIRST * , O(") CHARACTER
THE_TIMESTAMP NEXT * , O(") CHARACTER
COL3 NEXT * , O(") CHARACTER
COL4 NEXT * , O(") CHARACTER
COL5 NEXT * , O(") CHARACTER
COL6 NEXT * , O(") CHARACTER
COL7 NEXT * , O(") CHARACTER
COL8 NEXT * , O(") CHARACTER
COL9 NEXT * , O(") CHARACTER
COL10 NEXT * , O(") CHARACTER
COL11 NEXT * , O(") CHARACTER
COL12 NEXT * , O(") CHARACTER
Record 1: Rejected - Error on table table.LOCATION, column THE_TIMESTAMP.
ORA-01843: not a valid month
Table table.location:
0 Rows successfully loaded.
1 Row not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Space allocated for bind array: 198144 bytes(64 rows)
Read buffer bytes: 1048576
Total logical records skipped: 0
Total logical records read: 1
Total logical records rejected: 1
Total logical records discarded: 0
Run began on Fri Dec 14 13:42:45 2007
Run ended on Fri Dec 14 13:42:45 2007
Elapsed time was: 00:00:00.13
CPU time was: 00:00:00.07
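
For what it's worth: the auto-generated control file loads THE_TIMESTAMP as a plain CHARACTER field (see the column listing above), so the insert relies on the database's default timestamp format (typically DD-MON-RR), which doesn't match '2007-12-12 12:23:45' and raises ORA-01843. The usual fix is to edit the control file named in the log (C:\ORADATA\TEST\test.CTL) and give that column an explicit TIMESTAMP mask. The sketch below is only a rough reconstruction: the table name, column names and delimiter options are taken from the log above, everything else is an assumption, so adjust it to your actual table:
LOAD DATA
INFILE 'C:\ORADATA\TEST\test.csv'
APPEND
INTO TABLE location
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
( id
-- the mask below matches the sample row '2007-12-12 12:23:45'; adjust it if your data differs
, the_timestamp   TIMESTAMP "YYYY-MM-DD HH24:MI:SS"
, col3, col4, col5, col6, col7, col8, col9, col10, col11, col12
)
The SQLLDR example further down this page shows the same idea with a slash-separated format ("yyyy/mm/dd hh24:mi:ssxff").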

Similar Messages

  • Timestamp question in RS02

    Hi experts,
    If I use transaction RS02, how can I know which timestamp it is using? In our development system the screen shows 'Timestamp (UTC)', but when I access the QA environment, that transaction's screen just shows 'Timestamp' without indicating whether it is UTC or some other timestamp.
    How can I check which timestamp I'm using in that transaction in the QA environment?
    Thanks a lot for your help!

    Hi,
    1. For the timestamp, try table ROOSPRMSC; it gives the timestamp details.
    In SE11, by choosing the menu path Utilities -> Runtime Object -> Display, you can see the timestamp.
    2. RSA7 (Delta Queue) would also help you.
    Br, /Gopi

  • The old Date Timestamp question

    Hi,
    In my prog I store the Date and Timestamp of a test result. I use them to populate a JTree. The JTree is populated according to the date and the timestamp gives me the test time. I have two methods that extract the Date and Timestamp and store them in a HashMap:
    HashMap<Integer,Date>
    HashMap<Integer,Timestamp>
    My problem is that I need to extract the date from the Timestamp in order to populate the tree correctly. Meaning that I want to add a child to a node relevant to its date.
    Example:
    2006-05-05 (Date) 2006-05-05 16:28:09.093(Timestamp)
    2006-06-05 (Date) 2006-06-05 16:44:09.093(Timestamp)
    Would populate the tree:
    2006-05-05
    |-> 16:28:09.093
    2006-06-05
    |->16:44:09.093
    The child nodes aren't populated correctly.
    Your help is appreciated

    You may need to remember that the division of a timestamp into date and time is dependent on the time zone, and by default the time zone where you're running the program will be used. (Midnight being at different times in different places.)

  • TimeStamp - Question/Help

    Hello,
    Example:
    A driver Times In and Times Out using a Timestamp.
    I Subtract the TimeStamps to see how many Hours the Driver Worked for that shift.
    05-DEC-10 06.00.00.000000 AM - 04-DEC-10 12.00.00.000000 PM = +000000000 18:00:00.000000
    (DRIVER.TIMEOUT - DRIVER.TIMEIN) = HOURS_WORKED
    This works out great, but if the driver works the entire week, I need to know the total number of hours worked.
    I was wondering if it's possible to add up the total amount of hours generated.
    In other words, how do I add up HOURS_WORKED to get the total?
    Thank you

    Hi,
    If you're using an aggregate function (such as SUM), then everything in the SELECT clause has to be
    (a) an aggregate,
    (b) a GROUP BY expression,
    (c) a constant, or
    (d) something derived from the above.
    If you want to display the emp_num, then either use an aggregate function (such as MIN (emp_num)) or GROUP BY emp_num, like this:
    SELECT    emp_num
    ,       SUM ( CAST (time_out AS DATE)
                  - CAST (time_in  AS DATE)
                  ) * 24       AS hours_worked
    FROM      dss_snow_program
    WHERE       emp_num     IN ('2')
    AND       time_out     >= TO_TIMESTAMP ( '01-Dec-2010'
                                            , 'DD-Mon-YYYY'
                                            )
    AND       time_out     <  TO_TIMESTAMP ( '08-Dec-2010'     -- This date is NOT included
                                            , 'DD-Mon-YYYY'
                                            )
    GROUP BY  emp_num
    ORDER BY  emp_num
    ;
    Don't compare a TIMESTAMP to a string.
    Using 2-digit years is asking for trouble. Sometimes you get what you ask for.
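    One hedged addition: the CAST ... AS DATE trick above throws away the fractional seconds. If those matter, you can keep them by extracting the parts of the TIMESTAMP difference (an INTERVAL DAY TO SECOND) and summing them as hours. This is only a sketch reusing the table and column names already shown above; it has not been run against your data:
    SELECT    emp_num
    ,         SUM (  EXTRACT (DAY    FROM (time_out - time_in)) * 24
                   + EXTRACT (HOUR   FROM (time_out - time_in))
                   + EXTRACT (MINUTE FROM (time_out - time_in)) / 60
                   + EXTRACT (SECOND FROM (time_out - time_in)) / 3600
                  )       AS hours_worked     -- total hours, fractional seconds included
    FROM      dss_snow_program
    GROUP BY  emp_num
    ORDER BY  emp_num
    ;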

  • XSU time/timestamp question

    I have a problem with mapping from an XML element of type time into an Oracle column using XSU.
    If the XML node is defined as of type time (and populated, for example, as "20:00:00") what is the
    and if the Oracle column is defined as 'duration' or something, there is a problem with direct mapping into the Oracle column.
    What is the standard column type used in Oracle for an XML node of type time?


  • OAS timestamp question

    In Oracle Application Server, is there a deployment option that would force all the files, when deployed on the web server, to keep the modified date of the file in the .war archive as opposed to using the current server time?

    I don't think that's possible, because this is controlled by the OS, and files are created as new objects in the file system, so they get the date of creation.
    At least I'm not aware of anything for that.
    Greetings.

  • How to edit copa data source or change delta method.Need t-code/procedure

    Hi gurus,
    I have a CO-PA extractor in production. The CO-PA extraction is costing-based.
    I need to get an extra field from the CE1**** table. When I go to KEB0 and display, I see my required field under the characteristics from the segment table. It is unchecked.
    I want to know the transaction where I can go and edit the datasource to check that field and bring that data in.
    I tried going to T-code KEDV, but I couldn't figure out what to do there.
    I also tried deleting and recreating the datasource; then I am able to check that field. The problem is that the earlier existing datasource used to say
    Delta Method     Time Stamp Management in Profitability Analysis
    and now it says
    Delta Method     Generic Delta.
    How do I change the delta method from Generic Delta to Time Stamp Management in Profitability Analysis? During creation in KEB0 it doesn't give any option to change it.
    And what exactly is the difference between the two delta methods?
    Please post your inputs.
    Thanks in advance,
    > Points will be assigned for inputs

    Hi Ravi,
    I would expect you are communicating to your users that you will need some downtime to change this. To add characteristics to CO-PA, you actually have to first delete the datasource in KEB0 and then re-create it. I would work with your CO-PA config team to get the T-codes necessary to assign those objects to PA.
    Please reference OSS note 392635 for further reference on your CO-PA timestamp question.  This should answer your questions here.
    Pls. assign pts if this helps.
    Thanks,
    -Alex

  • A silly question about oracle.sql.timestamp and java.sql.timestamp

    Hi,
    I'm looking at a method that takes objects of type Object and does stuff if the object is really a java.sql.Timestamp. If it is not, then an error is flagged. In my case it flags an error when an object of type oracle.sql.timestamp is passed to it. Not being entirely comfortable with Java yet (I'm still learning it), here's my stupid question: why isn't oracle.sql.timestamp a subclass of java.sql.Timestamp? Also, various books indicate that java.sql.Timestamp maps to oracle.sql.timestamp. Does that mean you have to physically do the mapping:
    i.e.
    java.sql.Timestamp t = new Timestamp( new oracle.sql.Timestamp( CURRENTTIMESTAMP ).timestampValue() );
    or is there something else to it.
    Thanks.
    Harold.

    The best forum for this is probably Forum Home » Java » SQLJ/JDBC.
    Presumably you are referring to oracle.sql.TIMESTAMP. While this is intended to (and does) correspond to java.sql.Timestamp, it can't be a subclass because it needs to be a subclass of oracle.sql.Datum.

  • TCP1323Opts question - TCP Timestamps

    Hi,
    We have to be PCI-DSS compliant and have several Windows servers running ISA and TMG.
    We have:
    Win 2K with ISA 2000 (on its way out)
    Win 2K3 with ISA 2006
    Win 2K8 R2 with TMG 2010
    All of these servers, in the registry have TCP1323Opts set to '0' as per
    http://technet.microsoft.com/en-us/library/cc938205.aspx to disable TCP Timestamps.
    This is confirmed using Netsh where RFC 1323 Timestamps : disabled
    However, for PCI-DSS compliance we have to run vulnerability scans.
    Although only informational, all these servers come back as giving timestamp replies.
    Although the vulnerabilities due to this are minimal, from the timestamp it can be calculated how long a server has been running, and therefore you can work out whether it is missing the latest patches due to a lack of a reboot.
    I'm mainly puzzled as to why this is showing up when it is meant to be disabled.
    I've searched high and low across the Internet and can't find anything apart from the instructions as to how to change that reg entry.
    Do I need to do anything extra for the driver or something?
    Any help appreciated,
    Adrian

    Hi,
    Thanks for the post.
    Please check whether you have added the Tcp1323Opts registry key as follows:
    Tcp1323Opts
    Key: Tcpip\Parameters
    Value Type: REG_DWORD—number (flags)
    Valid Range: 0 or 2
    0 (disable the use of the TCP timestamps option)
    2 (enable the use of the TCP timestamps option)
    Default: No value.
    Description:
    This value controls the use of the RFC 1323 TCP Timestamp option. The default behavior of the TCP/IP stack is to not use the Timestamp options when initiating TCP connections, but use them if the TCP peer that is initiating
    communication includes them in their synchronize (SYN) segment.
    For more information about TCP/IP Registry Values, you could access this link:
    http://download.microsoft.com/download/c/2/6/c26893a6-46c7-4b5c-b287-830216597340/tcpip_reg.doc
    Hope this helps.
    Miles

  • Simple question about systemd console output - can i get a timestamp?

    Hello. With the old syslog program, sending the log to a console used to yield essentially the same output as /var/log/everything.log, critically including timestamps before each entry. Now with systemd, enabling console output just gives each entry by itself, so you can't tell if you're looking at 5 seconds' worth of activity or 5 days'. Is there any way a timestamp can be added here? I'd find that useful on my servers, as I have a screen connected to them but no keyboard.
    thanks

    Console output of what? Please post the exact command you're using and the output.
    # journalctl -b
    -- Logs begin at Sun 2013-08-11 17:23:43 CEST, end at Wed 2013-09-11 05:36:39 CEST. --
    Sep 10 19:11:44 localhost systemd-journal[36]: Runtime journal is using 184.0K (max 49.8M, leaving 74.8M of free 498.4M, current limit 49.8M).
    Sep 10 19:11:44 localhost systemd-journal[36]: Runtime journal is using 188.0K (max 49.8M, leaving 74.8M of free 498.4M, current limit 49.8M).
    Sep 10 19:11:44 localhost kernel: Initializing cgroup subsys cpuset
    Sep 10 19:11:44 localhost kernel: Initializing cgroup subsys cpu
    Sep 10 19:11:44 localhost kernel: Initializing cgroup subsys cpuacct
    Sep 10 19:11:44 localhost kernel: Linux version 3.11.0-1-ARCH (tobias@testing-i686) (gcc version 4.8.1 20130725 (prerelease) (GCC) ) #1 SMP PREEMPT Tue Sep 3 0
    Sep 10 19:11:44 localhost kernel: e820: BIOS-provided physical RAM map:
    Sep 10 19:11:44 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
    Sep 10 19:11:44 localhost kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
    Sep 10 19:11:44 localhost kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
    Sep 10 19:11:44 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003f6effff] usable
    Sep 10 19:11:44 localhost kernel: BIOS-e820: [mem 0x000000003f6f0000-0x000000003f6fafff] ACPI data
    Sep 10 19:11:44 localhost kernel: BIOS-e820: [mem 0x000000003f6fb000-0x000000003f6fffff] ACPI NVS
    Sep 10 19:11:44 localhost kernel: BIOS-e820: [mem 0x000000003f700000-0x000000003f77ffff] usable
    Sep 10 19:11:44 localhost kernel: BIOS-e820: [mem 0x000000003f780000-0x000000003fffffff] reserved
    Sep 10 19:11:44 localhost kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
    Sep 10 19:11:44 localhost kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
    Sep 10 19:11:44 localhost kernel: BIOS-e820: [mem 0x00000000ff800000-0x00000000ffbfffff] reserved
    Sep 10 19:11:44 localhost kernel: BIOS-e820: [mem 0x00000000fffffc00-0x00000000ffffffff] reserved
    <cut>
    Sep 10 19:11:56 black kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    Sep 10 19:11:56 black systemd-logind[164]: Watching system buttons on /dev/input/event2 (Power Button)
    Sep 10 19:11:56 black systemd-logind[164]: Watching system buttons on /dev/input/event1 (Power Button)
    Sep 10 19:12:00 black login[167]: pam_unix(login:session): session opened for user karol by LOGIN(uid=0)
    Sep 10 19:12:00 black systemd[1]: Starting user-1000.slice.
    Sep 10 19:12:00 black systemd[1]: Created slice user-1000.slice.
    Sep 10 19:12:00 black systemd[1]: Starting User Manager for 1000...
    Sep 10 19:12:00 black systemd-logind[164]: New session 1 of user karol.
    Sep 10 19:12:00 black systemd[1]: Starting Session 1 of user karol.
    Sep 10 19:12:00 black systemd[191]: pam_unix(systemd-shared:session): session opened for user karol by (uid=0)
    Sep 10 19:12:00 black systemd[1]: Started Session 1 of user karol.
    Sep 10 19:12:00 black login[167]: LOGIN ON tty1 BY karol
    Sep 10 19:12:00 black systemd[191]: Failed to open private bus connection: Failed to connect to socket /run/user/1000/dbus/user_bus_socket: No such file or dir
    Sep 10 19:12:00 black systemd[191]: Mounted /sys/kernel/config.
    Sep 10 19:12:01 black systemd[191]: Stopped target Sound Card.
    Sep 10 19:12:01 black systemd[191]: Starting Default.
    Sep 10 19:12:01 black systemd[191]: Reached target Default.
    Sep 10 19:12:01 black systemd[191]: Startup finished in 619ms.
    Sep 10 19:12:01 black systemd[1]: Started User Manager for 1000.
    Sep 10 19:12:00 black dhcpcd[168]: eth0: leased 192.168.1.4 for 259200 seconds
    Sep 10 19:12:00 black dhcpcd[168]: eth0: adding host route to 192.168.1.4 via 127.0.0.1
    Sep 10 19:12:00 black dhcpcd[168]: eth0: adding route to 192.168.1.0/24
    Sep 10 19:12:00 black dhcpcd[168]: eth0: adding default route via 192.168.1.1
    <cut>
    (No idea why there's 'Sep 10 19:12:01' followed by 'Sep 10 19:12:00')

  • SQLLDR Question - Load 2004/02/17 14:53:12 into a TIMESTAMP(6)

    Hi All
    Any idea how I'd load a date/time in the format '2004/02/17 14:53:12' into a TIMESTAMP(6) column?
    TIA
    Bill

    Here is an example:
    $ cat test.dat
    2004/02/17 14:53:12
    $ cat test.ctl
    LOAD DATA
       INFILE 'test.dat'
       append
       INTO TABLE test
    (test_date timestamp "yyyy/mm/dd hh24:mi:ssxff")
    $ sqlldr scott/tiger control=test.ctl
    SQL*Loader: Release 9.2.0.4.0 - Production on Mon Nov 7 10:53:40 2005
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    Commit point reached - logical record count 1
    $ sqlplus scott/tiger
    SQL*Plus: Release 9.2.0.4.0 - Production on Mon Nov 7 10:53:48 2005
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    Connected to:
    Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.4.0 - Production
    SQL> desc test
    Name                                      Null?    Type
    TEST_DATE                                          TIMESTAMP(6)
    SQL> select * from test;
    TEST_DATE
    17-FEB-04 14:53:12,000000
    SQL>
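    One side note: if you want to sanity-check a datetime mask before running SQL*Loader, you can try the same mask interactively with TO_TIMESTAMP (shown here without the optional fractional-seconds part, since the sample value has none). This is just a quick sketch; a wrong mask raises the same kind of ORA-018xx error you would otherwise only see in the loader log:
    SQL> SELECT TO_TIMESTAMP ('2004/02/17 14:53:12', 'YYYY/MM/DD HH24:MI:SS') FROM dual;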

  • URGENT HELP NEEDED FOR TimeStamp

    Urgent Millisecond Question....
    I have some Java code which used to work well with Oracle 8 and Sybase.
    When I use it with Oracle 9.2, it creates a problem.
    The code is:
    final public JDatetime getJDatetime(int columnIndex) throws SQLException {
         boolean convertb = Util.needConvertTime();
         Timestamp ts = _rs.getTimestamp(columnIndex);
         if (ts == null) return null;
         Date d = new Date(ts.getTime() + (ts.getNanos() / 1000000));
         if (convertb) d = Util.ReferenceTZ2Local(d);
         return new JDatetime(d);
    }
    Now in Oracle 8, say I insert a
    JDateTime value of 2003-06-18 16:51:06.89,
    and
    when I retrieve it using the above getJDatetime,
    it gets retrieved as
    2003-06-18 16:51:06.0,
    which is OK since the milliseconds are lost.
    Now in Oracle 9,
    when I use the conversion
    Date d = new Date(ts.getTime() + (ts.getNanos()/1000000));
    it gets converted to:
    Original Value While Inserting -->TimeStamp in JResultSet->2003-06-18 18:15:56.42
    Date in JResultSet-->Wed Jun 18 18:15:56 GMT 2003
    Date in JResultSet after converting to ReferenceTZ
    Wed Jun 18 18:15:56 GMT 2003
    DateTime in JResultSet after converting to DateTime6/18/03 6:15:56.840 PM
    GMTGETDatetime 6/18/03 6:15:56.840 PM GMT
    So as you can see,
    the millisecond value .42 got converted to .840,
    WHICH IS WRONG.
    Can anybody help me with it ??
    Mahesh

    The only Adobe program I know that can edit images is Photoshop.
    If you have troubles with Google software, you need to post in the appropriate Google forum.

  • FMS 3.5 says 'Bad network data': error in handling RTMP extended timestamps / chunkSize?

    Hello all,
    For a client, I am working on a project where a live RTMP stream is published to an Adobe FMS 3.5.6 server from a java application, using Red5 0.9.1 RTMPClient code.
    This works fine, until the timestamp becomes higher than 0xFFFFFF after 4.6 hours, and the RTMP extended timestamp field starts being used. I have already found that when the extended timestamp was written after the header, the last 4 bytes of the data were being cut off. I have fixed this locally, and now the data being sent seems to me to be conformant to the spec. However, FMS still throws an error message in the core log and then kills the connection from the Red5 client. Here is the error message:
    2011-06-03     14:28:02     13060     (e)2611029     Bad network data; terminating connection : chunkstream error:message length 11893407 is longerthan max rtmp packet length     -
    2011-06-03     14:28:02     13060     (e)2631029     Bad network data; terminating connection : (Adaptor: _defaultRoot_, VHost: _defaultVHost_, IP: 127.0.0.1, App: live/_definst_, Protocol: rtmp, Client: 5290168480216205379, Handle: 2147942405) : 05 FF FF FF 00 13 = 09 01 00 00 00 01 00 01 01 ' 01 00 00 00 00 00 13 4 09 0 00 00 01 ! 9A & L 0F FA F6 12 , B4 A6 CE H 8A AB DC G BB d k 1B 9F ) 13 13 D2 9A E5 t 8 B8 8D 94 ! 8A AE F6 AF } " U 0 D3 Q EF FF ~ 8D 97 D9 FF BE A3 F3 C9 97 o 9D # F9 7F h A4 F7 } / FB & F1 DC 9C BF   BD D3 E7 CA 97 FE E2 B9 E4 F7 9E 1A F6 BA } C9 w FC _ / / w FE n EF D7 P 9C F4 BE 82 8E F7 | BE 97 B4 BB D7 FE ED I / FB D1 93 9A F9 X \ 85 BD DD I E3 4 E8 M 13 D3 " ) BE A9 92 E5 83 D4 B4 12 DE D5 A3 E6 F4 k DE BF Q 3 A0 g r A4 f D9 BD w * } F7 r 8A S 2 . AB BD EE ^ l f AF E1 0B $ AF 9D D7 - BF E8 ! D3 } D3 i E3 B8 F2 M A8 " B1 A5 EF s ] A5 BC 96 E5 u e X q D2 F1 r F9 i 92 b EE Z d F9 * A6 BB FD 17 w 4 DD 3 o u EB ] ] EF FE B5 B1 0A F2 A0 DD FD B2 98 DF E8 e F6 CB FD 96 V % A5 D5 k ] FD w EF AF k v AA E8 ! 9F / w BE FA 9A _ E F2 D3 , ? 17 } AD 7 EC B3   } 07 B5 | z { { A5 = 11 90 CF BF ; 4 FE EF 95 F7 E7 DF B9 , AF z 91 CF C9 BD DE CB { F5 17 } F2 E5 D7 DF z E6 [ 96 > Y m 9F EB AF DD D8 E8 v B9 A8 E9 % A7 | 1 CF 8B D Z k N DF F8 N FA S R FE . ~ CB A 9 E1 ) 8F 8E BB EC c 6 13 F1 AC FD FD FC 8A F7 F3 K B9 FA ^ / A4 FC B9 AA F6 DE C2 [ 1A E c r B3 BF E5 EC B5 x 94 FD . A9 t I Q % EA EC DE | K FE z A4 97 F9 " 1 0F CA FB F5 F5 p 9E 99 3 - ; B8 F4 F1 FF t A3 EC BC # DE AC 91 13 19 o < 06 F5 FD 7F 7 _ $ D B t B5 0D 8A C1 C1 BA 0B FE DB B7 83 _ } BD z F7 CB { FC M A9 8D = D5 B1 < 85 = EF E1 ; BA H y FC BC B4 C A2 D9 ` e E4 94 H 5 13 ' 93 93 8E E C2 1C R 97 9 X B7 FF 10 9F { ) F1 CF AB AC ] EE H A2 DE D3 C5 m F6 K A2 A7 A2 89 D2 z EB DF 97 ^ k 9E 99 BB E7 B6 97 w { ~ + C7 B2 } FE ' C4 | B6 o H DD r A8 9F DC FF F9 Q b l 93 T B6 EE FF 11 j CD s P C F1 3 R I F8 D8 R 9D 93 AA D5 + DE FC BE " B9 E1 ` CB BD 0F F5 C7 AA w CF 8D p 9A F7 g f N FF 84 B7 K Q 93 g E1 - D3 s } w v AE 96 98 ED CF BA E9 2 . f 99 95 97 o 13 CA F7 s e $ F4 B5 15 C4 A8 DE M F7 w \ 8D 00 C6 C2 b D3 / 7 w F2 ' BF CD 89 FF > D7 FB BC A2 S N FB A5 CD AF D3 F9 9D DF AE B5 17 CF 9D B7 , B9 9 ^ 7F [ 93 84 F7 } _ EA DF u \ 99 Z t E CA M EF 7 " AD FE 92 9E n 7F EB D8 C { 99 8B 9E w H BF B1 | g 9F F3 FA E1 - E5 CB BB x CF p 8B D2 w v EF w FA E2 F7 s C5 AC $ FC B4 DB BE G E4 DC F0 A0 96 F3 ! t DC FF % A5 CB A4 ^ AB D2 BD E7 9A E ' 08 + AF U 17 EB 8A w A7 N E4 A5 x 93 12 _ - ; 09 DD DF m 11 BE w \ } BA D3 t BC D9 97 9B C5 7F D8 H F1 D 7 8A ^ FA n F0 B8 W E6 84 5 - 8 B5 h o C4 F7 83 P 88 CB AE m t BB L 95 A9 s 90 A2 Y o DF K _ / l D2 D1 C9 91 ' E4 BD / / D 97 m BB E7 14 93 % C5 ; DD CF D8 : ~ B5 4 F FA U F0 8F w w DC FD 83 FC 13 EF w p DA A5 07 _ * - 1D 14 9D D5 84 F E6 F0 FF E4 15 w n A5 9F DE d AE F5 " - f D2 AE 96 1F # FA F1 x C1 L DF l M 06 8A E4 z DB 17 BA l DA e 15 CD 85 86 1F 09 82 h ] C6 { E7 C5 AF Z C5 B0 83 v D9 03 FC / ~      -
    The message for which the hex dump is displayed, is a video message of size 4925 bytes. Below is the basic logging in my application:
    *** Event sent to RTMP connector: Video - ts: 16777473 length: 4925. Waiting time: -57937, event timestamp: 16777473
    14:28:02.045 [RtmpPublisher-workerThread] DEBUG o.r.s.s.consumer.ConnectionConsumer - Message timestamp: 16777473
    14:28:02.045 [RtmpPublisher-workerThread] DEBUG o.r.s.n.r.codec.RTMPProtocolEncoder - Channel id: 5
    14:28:02.045 [RtmpPublisher-workerThread] DEBUG o.r.s.n.r.codec.RTMPProtocolEncoder - Last ping time for connection: -1
    14:28:02.045 [RtmpPublisher-workerThread] DEBUG o.r.s.n.r.codec.RTMPProtocolEncoder - Client buffer duration: 0
    14:28:02.046 [RtmpPublisher-workerThread] DEBUG o.r.s.n.r.codec.RTMPProtocolEncoder - Packet timestamp: 16777473; tardiness: -30892; now: 1307104082045; message clock time: 1307104051152, dropLiveFuturefalse
    14:28:02.046 [RtmpPublisher-workerThread] DEBUG o.r.s.n.r.codec.RTMPProtocolEncoder - !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!12b Wrote expanded timestamp field
    14:28:02.046 [NioProcessor-22] DEBUG o.r.server.net.rtmp.BaseRTMPHandler - Message sent
    I have captured the entire frame containing this message with wireshark, and annotated it a bit. You can find it here:
    http://pastebin.com/iVtphPgU
    The video message of 4925 bytes (hex 00 13 3D) is cut up into chunks of 1024 bytes (chunkSize 1024 set by Red5 client and sent to FMS). Indeed, after the 12-byte header and the 4-byte extended timestamp, there are 1024 bytes before the 1-byte header for the next chunk (hex C5). The chunks after that also contain 1024 bytes after the chunk header. This appears correct to me (though please correct me if I'm wrong).
    When we look at the error message in the core log, the hex dump displayed also contains 1024 bytes, but it starts from the beginning of the message header. The last 16 bytes of the message chunk itself are not shown.
    My question is this: is the hex dump in the error message always capped to 1024 bytes, or did FMS really read too little data?
    Something that may be of help is the reported 'too long' message length of 11893407. This corresponds to hex B5 7A 9F, which can also be found in the packet, namely at row 0c60 (I've annotated it as [b5 7a 9f]). This location is exactly 16 bytes after the start of the 4th chunk's data, not really a place to look for timestamps.
    My assumptions during this bug hunting session were the following (would be nice if someone could validate these for me):
    - message length, as specified in the RTMP 12 and 8-bit headers, defines the total number of data bytes for the message, NOT including the header of the first message chunk, its extended timestamp field, or the 1-byte headers for subsequent chunks. The behaviour is the same whether or not the message has an extended timestamp.
    - chunk size, as set by the chunkSize message, defines the total number of data bytes for the chunk, not including the header or extended timestamp field. The behaviour is the same whether or not the message has an extended timestamp.
    I believe I've chased this problem as far as I can without having access to the FMS 3.5 code, or at least being able to crank up the debug logging to the per-message level. I realize it's a pretty detailed issue and a long shot, but being able to publish a stream continuously 24/7 is critical for the project.
    I would be very grateful if someone could have a look at this hex dump to see if the message itself is correct, and if so, to have a look at how FMS3.5.6 handles this.
    Don't hesitate to ask me for more info if it can help.
    Thanks in advance
    Davy Herben
    Solidity

    Hello,
    It took a bit longer than expected, but I have managed to create a minimal test application that will reproduce the error condition on all machines I've tested on. The application will simply read an H264 file and publish it to an FMS as a live stream. To hit the error condition faster, without having to wait 4.6 hours, the application will add a fixed offset to all timestamps before sending them to the FMS.
    I have created two files:
    http://www.solidity.be/publishtest.jar : Runnable java archive with all libraries built in
    http://www.solidity.be/publishtest.zip : Zip file containing sources and libraries
    You can run the jar as follows:
    java -jar publishtest.jar <inputFile> <server> <port> <application> <stream> <timestampOffset>
    - inputFile: path to an H264 input video file
    - server: hostname or IP of FMS server to publish to
    - port: port number to publish to (1935)
    - application: application to publish to (live)
    - stream: stream to publish to (output)
    - timestampOffset: nr of milliseconds to add to the timestamp of each event, in hexadecimal format. Putting FFFFFF here will cause the server to reject the connection immediately, while FFFF00 or FFF000 will allow the publishing to run for a while before the FMS kills it
    Example of a complete command line:
    java -jar publishtest.jar /home/myuser/Desktop/movie.mp4 localhost 1935 live output FFF000
    Good luck with the bug hunting. Let me know if there is anything I can help you with.
    Kind regards,
    Davy Herben

  • Please answer these questions.....Urgent

    Q You are using Data Guard to ensure high availability. The directory structures on the primary and the standby hosts are different.
    Referring to the scenario above, what initialization parameter do you set up during configuration of the standby database?
    db_convert_dir_name
    db_convert_file_name
    db_dir_name_convert
    db_directory_convert
    db_file_name_convert
    Q What facility does Oracle provide to detect chained and migrated rows after the proper tables have been created?
    The RDBMS cannot detect this. It must use regular export and import with compress=y to remove chained and migrated rows as part of the regular database.
    The UTLCHAIN utility
    The DBMS_REPAIR package
    The ANALYZE command with the LIST CHAINED ROWS option
    The DBMS_MIG_CHAIN built-in package
    Q While doing an export, the following is encountered:
    ORA-1628 ... max # extents ... reached for rollback segment ..
    Referring to the scenario above, what do you do differently so that the export is resumed even after getting the space allocation error?
    Use the RESUMABLE=Y option for the export.
    Run the export with the AUTO_ROLLBACK_EXTEND=Y option.
    Increase the rollback segment extents before running the export.
    Use the RESUME=Y option for the export.
    Monitor the rollback segment usage while the export is running and increase it if it appears to be running out of space.
    Q
    The DBCA (Database Configuration Assistant) prompts the installer to enter the password for which default users?
    SYS and SYSTEM
    OSDBA and INTERNAL
    SYSOPER and INTERNAL
    SYS and INTERNAL
    SYSTEM and SYSDBA
    Q You are designing the physical database for an application that stores dates and times. This will be accessed by users from all over the world in different time zones. Each user needs to see the time in his or her time zone.
    Referring to the scenario above, what Oracle data type do you use to facilitate this requirement?
    DATE
    TIMESTAMP WITH TIME ZONE
    TIMESTAMP
    DATETIME
    TIMESTAMP WITH LOCAL TIME ZONE
    Q Which one of the following conditions prevents you from redefining a table online?
    The table has a composite primary key.
    The table is partitioned by range.
    The table's organization is index-organized.
    The table has materialized views defined on it.
    The table contains columns of data type LOB.
    Q An Oracle database administrator is upgrading from Oracle 8.1.7 to Oracle 9i.
    Referring to the scenario above, which one of the following scripts does the Oracle database administrator run after verifying all steps in the upgrade checklist?
    u8.1.7.sql
    u81700.sql
    u0900020.sql
    u0801070.sql
    u0817000.sql
    Q What command do you use to drop a temporary tablespace and the associated OS files?
    ALTER DATABASE TEMPFILE '/data/oracle/temp01.dbf' DROP;
    ALTER DATABASE DATAFILE '/data/oracle/temp01.dbf' DROP;
    ALTER DATABASE TEMPFILE '/data/oracle/temp01.dbf' DROP INCLUDING DATAFILES;
    ALTER DATABASE DATAFILE '/data/oracle/temp01.dbf' DROP CASCADE;
    ALTER DATABASE DATAFILE '/data/oracle/temp01.dbf' DROP INCLUDING CONTEN
    Q You wish to use a graphical interface to manage database locks and to identify blocking locks.
    Referring to the scenario above, what DBA product does Oracle offer that provides this functionality?
    Oracle Expert, a tool in the Oracle Enterprise Manager product
    Lock Manager, a tool in the base Oracle Enterprise Manager (OEM) product, as well as the console
    Lock Manager, a tool in Oracle Enterprise Manager's Tuning Pack
    The console of Oracle Enterprise Manager
    Viewing the Lock Manager charts of the Oracle Performance Manager, a tool in the Diagnostics Pack add on
    Q CREATE DATABASE abc
    MAXLOGFILES 5
    MAXLOGMEMBERS 5
    MAXDATAFILES 20
    MAXLOGHISTORY 100
    Referring to the code segment above, how do you change the MAX parameters shown?
    They can be changed using an ALTER SYSTEM command, but the database must be in the NOMOUNT state.
    The MAX parameters cannot be changed without exporting the entire database, re-creating it, and importing.
    They can be changed using an ALTER SYSTEM command while the database is open.
    They can be changed in the init.ora file, but the database must be restarted for the values to take effect.
    They cannot be changed unless you re-create your control file
    Q You need to change the archivelog mode of an Oracle database.
    Referring to the scenario above, what steps do you take before actually changing the archivelog mode?
    Execute the archive log list command
    Start up the instance and mount the database but do not open it.
    Start up the instance and mount and open the database in restricted mode.
    Kill all user sessions to ensure that there is no database activity that might trigger redolog activity.
    Take all tablespaces offline
    Q You are experiencing performance problems due to network traffic. One way to tune this is by setting the SDU size.
    Referring to the scenario above, why do you change the SDU size?
    A high-speed network is available where the data transmission effect is negligible.
    The application can be tuned to account for the delays.
    The requests to the database return small amounts of data as in an OLTP system.
    The data coming back from the server are fragmented into several packets.
    A large number of users are logged on concurrently to the system.
    Q When interpreting statistics from the v$sysstat, what factor do you need to keep in mind that can skew your statistics?
    Choice 1 The statistics are static and must be updated by running the analyze command to include the most recent activity.
    Choice 2 The statistics are only valid as a point in time snapshot of activity.
    Choice 3 The statistics gathered by v$sysstat include database startup activities and database activity that initially populates the database buffer cache and shared pool.
    Choice 4 The statistics do not include administrative users.
    Choice 5 The statistics gathered are based on individual sessions, so you must interpret them based on the activity and application in which the user was involved at the time you pull the statistics.
    Q You want to shut down the database, but you do not want client connections to lose any non-committed work. You also do not want to wait for every open session to disconnect.
    Referring to the scenario above, what method do you use to shut down the database?
    Choice 1 Shutdown abort
    Choice 2 Shutdown immediate
    Choice 3 Shutdown transactional
    Choice 4 Shutdown restricted sessions
    Choice 5 Shutdown normal
    Q What step or steps do you take to enable Automatic Undo Management (AUM)?
    Choice 1 Create the UNDO tablespace, then ALTER SYSTEM SET AUTO_UNDO.
    Choice 2 Use ALTER SYSTEM SET AUTO_UNDO; parameter.
    Choice 3 Add UNDO_MANAGEMENT=AUTO parameter to init.ora, stop/start the database.
    Choice 4 Add UNDO_AUTO to parameter to init.ora, stop/start the database, and create the UNDO tablespace.
    Choice 5 Add UNDO_MANAGEMENT=AUTO parameter to init.ora, create the UNDO tablespace, stop/start the database
    AUTOMATIC UNDO PARAMETER SETTINGS.
    Q What Oracle 9i feature allows the database administrator to create tablespaces, datafiles, and log groups WITHOUT specifying physical filenames?
    Choice 1 Dynamic SGA
    Choice 2 Advanced Replication
    Choice 3 Data Guard
    Choice 4 Oracle Managed Files
    Choice 5 External Tables
    Q What package is used to specify audit requirements for a given table?
    Choice 1 DBMS_TRACE
    Choice 2 DBMS_FGA
    Choice 3 DBMS_AUDIT
    Choice 4 DBMS_POLICY
    Choice 5 DBMS_OBJECT_AUDIT
    Q What facility does Oracle provide to detect chained and migrated rows after the proper tables have been created?
    Choice 1 The ANALYZE command with the LIST CHAINED ROWS option
    Choice 2 The RDBMS cannot detect this. It must use regular export and import with compress=y to remove chained and migrated rows as part of the regular database.
    Choice 3 The DBMS_MIG_CHAIN built-in package
    Choice 4 The DBMS_REPAIR package
    Choice 5 The UTLCHAIN utility
    Q What are the three functions of an undo segment?
    Choice 1 Rolling back archived redo logs, database recovery, recording user trace information
    Choice 2 The rollback segment has only one purpose, and that is to roll back transactions that are aborted.
    Choice 3 Rolling back uncommitted transactions, maintaining read consistency, logging processed SQL statements
    Choice 4 Rolling back transactions, maintaining read consistency, database recovery
    Choice 5 Rolling back transactions, recording Data Manipulation Language (DML) statements processed against the database, recording Data Definition Language (DDL) statements processed against the database
    Q Which one of the following describes locally managed tablespaces?
    Choice 1 Tablespaces within a Recovery Manager (RMAN) repository
    Choice 2 Tablespaces that are located on the primary server in a distributed database
    Choice 3 Tablespaces that use bitmaps within their datafiles, rather than data dictionaries, to manage their extents
    Choice 4 Tablespaces that are managed via object tables stored in the system tablespace
    Choice 5 External tablespaces that are managed locally within an administrative repository serving an Oracle distributed database or Oracle Parallel Server
    Q The schema in a database you are administering has a very complex and non-user friendly table and column naming system. You need a simplified schema interface to query and on which to report.
    Which one of the following mechanisms do you use to meet the requirement stated in the above scenario?
    Choice 1 Synonym
    Choice 2 Stored procedure
    Choice 3 Labels
    Choice 4 Trigger
    Choice 5
    View
    Q You need to change the archivelog mode of an Oracle database.
    Referring to the scenario above, what steps do you take before actually changing the archivelog mode?
    Choice 1 Start up the instance and mount the database but do not open it.
    Choice 2 Execute the archive log list command
    Choice 3 Kill all user sessions to ensure that there is no database activity that might trigger redolog activity.
    Choice 4 Take all tablespaces offline.
    Choice 5 Start up the instance and mount and open the database in restricted mode.
    Q The Oracle Internet Directory debug log needs to be changed to show the following events information.
    Given the Debug Event Types and their numeric values:
    Starting and stopping of different threads. Process related. - 4
    Detail level. Shows the spawned commands and the command-line arguments passed - 32
    Operations being performed by configuration reader thread. Configuration refresh events. - 64
    Actual configuration reading operations - 128
    Operations being performed by scheduler thread in response to configuration refresh events, and so on - 256
    What statement turns debug on for all of the above event types?
    Choice 1 oidctl server=odisrv debug=4 debug=32 debug=64 debug=128 debug=256 start
    Choice 2 oidctl server=odisrv debug="4,32,64,128,256" start
    Choice 3 oidctl server=odisrv flags="debug=4 debug=32 debug=64 debug=128 debug=256" start
    Choice 4 oidctl server=odisrv flags="debug=484" start
    Choice 5 oidctl server=odisrv flags="debug=4,32,64,128,256" start
    Q Which Data Guard mode has the lowest performance impact on the primary database?
    Choice 1 Instant protection mode
    Choice 2 Guaranteed protection mode
    Choice 3 Rapid protection mode
    Choice 4 Logfile protection mode
    Choice 5 Delayed protection mode
    Q In a DSS environment, the SALES data is kept for a rolling window of the past two years.
    Referring to the scenario above, what type of partitioning do you use for this data?
    Choice 1 Hash Partitioning
    Choice 2 Range Partitioning
    Choice 3 Equipartitioning
    Choice 4 List Partitioning
    Choice 5 Composite Partitioning
    Q What are the three main areas of the SGA?
    Choice 1 Log buffer, shared pool, database writer
    Choice 2 Database buffer cache, shared pool, log buffer
    Choice 3 Shared pool, SQL area, redo log buffer
    Choice 4 Log writer, archive log, database buffer
    Choice 5
    Database buffer cache, log writer, shared pool
    Q When performing full table scans, what happens to the blocks that are read into buffers?
    Choice 1 They are put on the MRU end of the buffer list by default.
    Choice 2 They are put on the MRU end of the buffer list if the NOCACHE clause was used while altering or creating the table.
    Choice 3 They are read into the first free entry in the buffer list.
    Choice 4 They are put on the LRU end of the buffer list if the CACHE clause was used while altering or creating the table.
    Choice 5 They are put on the LRU end of the buffer list by default
    Q Standard security policy is to force users to change their passwords the first time they log in to the Oracle database.
    Referring to the scenario above, how do you enforce this policy?
    Choice 1 Use the FORCE PASSWORD EXPIRE clause when the users are first created in the database.
    Choice 2 Ask the users to follow the standards and trust them to do so.
    Choice 3 Periodically compare the users' passwords with their initial password and generate a report of the users violating the standard.
    Choice 4 Use the PASSWORD EXPIRE clause when the users are first created in the database.
    Choice 5 Check the users' passwords after they first log in to see if they have changed it. If not, remind them to do so.
    Q What object privilege is necessary for a foreign key constraint to be created and enforced on the referenced table?
    Choice 1 References
    Choice 2 Alter
    Choice 3 Update
    Choice 4 Resource
    Choice 5 Select
    Q What command do you use to drop a temporary tablespace and the associated OS files?
    Choice 1 ALTER DATABASE DATAFILE '/data/oracle/temp01.dbf' DROP INCLUDING CONTENTS
    Choice 2 ALTER DATABASE TEMPFILE '/data/oracle/temp01.dbf' DROP INCLUDING DATAFILES;
    Choice 3 ALTER DATABASE TEMPFILE '/data/oracle/temp01.dbf' DROP;
    Choice 4 ALTER DATABASE DATAFILE '/data/oracle/temp01.dbf' DROP;
    Choice 5 ALTER DATABASE DATAFILE '/data/oracle/temp01.dbf' DROP CASCADE;
    Q You need to implement a failover strategy using TAF. You do not have enough resources to ensure that your backup Oracle instance will be up and running in parallel with the primary.
    Referring to the scenario above, what failover mode do you use?
    Choice 1 FAILOVER_MODE=manual
    Choice 2 FAILOVER_MODE=none
    Choice 3 FAILOVER_MODE=auto
    Choice 4 FAILOVER_MODE=basic
    Choice 5 FAILOVER_MODE=preconnect
    Q An Oracle database used for an OLTP application is encountering the "snapshot too old" error.
    Referring to the scenario above, which database object or objects do you query in order to set the OPTIMAL parameter for the rollback segments?
    Choice 1 V$ROLLNAME and V$ROLLSTAT
    Choice 2 V$ROLLNAME
    Choice 3 V$ROLLSTAT
    Choice 4 DBA_ROLL and DBA_ROLLSTAT
    Choice 5 DBA_ROLLBACK_SEG
    Q What are five background processes that must always be running in a functioning Oracle Instance?
    Choice 1 SMON (system monitor), PMON (process monitor), RECO (recoverer process), ARCH (archive process), CKPT (checkpoint process)
    Choice 2 DBW0 (database writer), SMON (system monitor), PMON (process monitor), LGWR (log writer), CKPT (checkpoint process)
    Choice 3 DBW0 (database writer), SMON (system monitor), PMON (process monitor), D000 (Dispatcher process), CKPT (checkpoint process)
    Choice 4 DBW0 (database writer), CKPT (checkpoint process), RECO (recoverer process), LGWR (log writer), ARCH (archive process)
    Choice 5 DBW0 (database writer), LGWR (log writer), ARCH (archive process), CKPT (checkpoint process), RECO (recoverer process)
    You have two large tables with thousands of rows. To select rows from the table_1, which are not referenced by an indexed common column (e.g. col_1) in table_2, you issue the following statement:
    select * from table_1
    where col_1 NOT in (select col_1 from table_2);
    This statement is taking a very long time to return its result set.
    Referring to the scenario above, which equivalent statement returns much faster?
    Choice 1
    select * from table_1
    where not exists (select * from table_2)
    Choice 2
    select * from table_2
    where col_1 not in (select col_1 from table_1)
    Choice 3
    select * from table_1
    where col_1 in (select col_1 from table_2 where col_1 = table_1.col_1)
    Choice 4
    select * from table_1
    where not exists (select 'x' from table_2 where col_1 = table_1.col_1)
    Choice 5
    select table_1.* from table_1, table_2
    where table_1.col_1 = table_2.col_1 (+)
    Performance is poor during peak transaction periods on a database you administer. You would like to view some statistics on areas such as LGWR (log writer) waits.
    Referring to the scenario above, what performance view do you query to access these statistics?
    Choice 1
    DBA_CATALOG
    Choice 2
    V$SESS_IO
    Choice 3
    V$SYSSTAT
    Choice 4
    V$PQ_SYSSTAT
    Choice 5
    V$SQLAREA
    You need to assess the performance of your shared pool at instance startup, but you cannot restart the database.
    Referring to the scenario above, how do you empty your SGA?
    Choice 1
    Execute $ORACLE_HOME/bin/db_shpool_flush
    Choice 2
    ALTER SYSTEM FLUSH SHARED_POOL
    Choice 3
    ALTER SYSTEM CLEAR SHARED POOL
    Choice 4
    DELETE FROM SYS.V$SQLAREA
    Choice 5
    DELETE FROM SYS.V$SQLTEXT
    You are reading the explain plan of a problem query and notice that full table scans are used with a HASH join.
    Referring to the scenario above, in what instance is a HASH join beneficial?
    Choice 1
    When joining two small tables--neither having any primary keys or unique indexes
    Choice 2
    When no indexes are present
    Choice 3
    When using the parallel query option
    Choice 4
    When joining two tables where one table may be significantly larger than the other
    Choice 5
    Only when using the rule-based optimizer
    An Oracle database administrator is upgrading from Oracle 8.1.7 to Oracle 9i.
    Referring to the scenario above, which one of the following scripts does the Oracle database administrator run after verifying all steps in the upgrade checklist?
    Choice 1
    u0817000.sql
    Choice 2
    u0900020.sql
    Choice 3
    u8.1.7.sql
    Choice 4
    u81700.sql
    Choice 5
    u0801070.sql
    You have a large On-Line Transaction Processing (OLTP) database running in archive log mode with two redo log groups that have two members each.
    Referring to the above scenario, to avoid stalling during peak activity periods, which one of the following actions do you take?
    Choice 1
    Add a third member to each of the groups.
    Choice 2
    Increase your LOG_CHECKPOINT_INTERVAL setting.
    Choice 3
    Turn off archive logging.
    Choice 4
    Add a third redo log group.
    Choice 5
    Turn off redo log multiplexing
    What object does a database administrator create to store precompiled summary data?
    Choice 1
    Replicated Table
    Choice 2
    Archive Log
    Choice 3
    Temporary Tablespace
    Choice 4
    Cached Table
    Choice 5
    Materialized View
    Which one of the following statements do you execute in order to find the current default temporary tablespace?
    Choice 1
    SELECT property_name, property_value FROM v$database_properties
    Choice 2
    show parameter curr_default_temp_tablespace
    Choice 3
    SELECT property_name, property_value FROM all_database_properties
    Choice 4
    SELECT property_name, property_value FROM database_properties
    Choice 5
    SELECT property_name, property_value FROM dba_database_properties
    In which one of the following situations do you use a bitmap index?
    Choice 1
    With column values that are guaranteed to be unique
    Choice 2
    With column values having a high cardinality
    Choice 3
    With column values having a consistently uniform distribution
    Choice 4
    With column values having a low cardinality
    Choice 5
    With column values having a non-uniform distribution
    A table has more than two million rows and, if exported, will exceed 4 GB in size with data, indexes, and constraints. The UNIX you are using has a 2 GB limit on file sizes. This table needs to be backed up using Oracle EXPORT.
    There are two ways this table can be exported and split into multiple files. One way is to use the UNIX pipe, split, and compress commands in conjunction with the Oracle EXPORT utility to generate multiple equally-sized files.
    Referring to the scenario above, what is the other way that you can export and split into multiple files?
    Choice 1
    Export the data into one file and the index into another file.
    Choice 2
    Use a WHERE clause with the export to limit the number of rows returned.
    Choice 3
    Vertically partition the table into sizes of less than 2 GB and then export each partition as a separate file.
    Choice 4
    Specify the multiple files in the FILE parameter and specify the FILESIZE in the EXPORT parameter file.
    Choice 5
    Horizontally partition the table into sizes of less than 2 GB and then export each partition as a separate file.
    Which one of the following statements describes the PASSWORD_GRACE_TIME profile setting?
    Choice 1
    It specifies the grace period, in days, for changing the password once expired.
    Choice 2
    It specifies the grace period, in days, for changing the password from the time it is initially set and the time the account is made active.
    Choice 3
    It specifies the grace period, in minutes, for changing the password once expired.
    Choice 4
    It specifies the grace period, in days, for changing the password after the first successful login after the password has expired.
    Choice 5
    It specifies the grace period, in hours, for changing the password once expired.
    In OEM, what color and icon are associated with a warning?
    Choice 1
    Yellow hexagon
    Choice 2
    Yellow flag
    Choice 3
    Red flag
    Choice 4
    Gray flag
    Choice 5
    Red hexagon
    What parameter in the SQLNET.ORA file specifies the order of the naming methods to be used?
    Choice 1
    NAMES.SEARCH_ORDER
    Choice 2
    NAMES.DOMAIN_HINTS
    Choice 3
    NAMES.DIRECTORY_PATH
    Choice 4
    NAMES.DOMAINS
    Choice 5
    NAMES.DIRECTORY
    An Oracle 9i database instance has automatic undo management enabled. This allows you to use the Flashback Query feature of Oracle 9i.
    Referring to the scenario above, what UNDO parameter needs to be set so that this feature allows consistent queries of data up to 90 days old?
    Choice 1
    UNDO_TABLESPACE
    Choice 2
    UNDO_TIMELIMIT
    Choice 3
    UNDO_MANAGEMENT
    Choice 4
    UNDO_FLASHBACKTO
    Choice 5
    UNDO_RETENTION
    DB_BLOCK_SIZE=8192
    DB_CACHE_SIZE=128M
    DB_2K_CACHE_SIZE=64M
    DB_4K_CACHE_SIZE=32M
    DB_8K_CACHE_SIZE=16M
    DB_16K_CACHE_SIZE=8M
    Referring to the initialization parameter settings above, what is the size of the cache of standard block size buffers?
    Choice 1
    8 M
    Choice 2
    16 M
    Choice 3
    32 M
    Choice 4
    64 M
    Choice 5
    128 M
    DB_CREATE_FILE_DEST='/u01/oradata/app01'
    DB_CREATE_ONLINE_LOG_DEST_1='/u02/oradata/app01'
    Referring to the sample code above, which one of the following statements is NOT correct?
    Choice 1
    Data files created with no location specified are created in the DB_CREATE_FILE_DEST directory.
    Choice 2
    Control files created with no location specified are created in the DB_CREATE_ONLINE_LOG_DEST_1 directory.
    Choice 3
    Redolog files created with no location specified are created in the DB_CREATE_ONLINE_LOG_DEST_1 directory.
    Choice 4
    Control files created with no location specified are created in the DB_CREATE_FILE_DEST directory.
    Choice 5
    Temp files created with no location specified are created in the DB_CREATE_FILE_DEST directory.
    LogMiner GUI is a part of which one of the following?
    Choice 1
    Oracle Enterprise Manager
    Choice 2
    Oracle LogMiner Plug-In
    Choice 3
    Oracle Diagnostics Pack
    Choice 4
    Oracle Performance Tuning Pack
    Choice 5
    Oracle LogMiner StandAlone GUI
    The schema in a database you are administering has a very complex and non-user friendly table and column naming system. You need a simplified schema interface to query and on which to report.
    Which one of the following mechanisms do you use to meet the requirement stated in the above scenario?
    Choice 1
    View
    Choice 2
    Trigger
    Choice 3
    Stored procedure
    Choice 4
    Synonym
    Choice 5
    Labels
    alter index gl.GL_JE_LINES_N1 rebuild
    You determine that an index has too many extents and want to rebuild it to avoid fragmentation performance degradation.
    When you issue the statement above, where is the rebuilt index stored?
    Choice 1
    In the default tablespace for the login name you are using
    Choice 2
    You cannot rebuild an index. You must drop the existing index and re-create it using the create index statement.
    Choice 3
    In the system tablespace
    Choice 4
    In the same tablespace as it is currently stored
    Choice 5
    In the index tablespace respective to the data table on which the index is built
    Which one of the following describes locally managed tablespaces?
    Choice 1
    Tablespaces within a Recovery Manager (RMAN) repository
    Choice 2
    External tablespaces that are managed locally within an administrative repository serving an Oracle distributed database or Oracle Parallel Server
    Choice 3
    Tablespaces that are located on the primary server in a distributed database
    Choice 4
    Tablespaces that use bitmaps within their datafiles, rather than data dictionaries, to manage their extents
    Choice 5
    Tablespaces that are managed via object tables stored in the system tablespace
    Which method of database backup supports true incremental backups?
    Choice 1
    Export
    Choice 2
    Operating System backups
    Choice 3
    Oracle Enterprise Backup Utility
    Choice 4
    Incremental backups are not supported. You must use full or cumulative backups.
    Choice 5
    Recovery Manager
    You are using Data Guard to ensure high availability. The directory structures on the primary and the standby hosts are different.
    Referring to the scenario above, what initialization parameter do you set up during configuration of the standby database?
    Choice 1
    db_dir_name_convert
    Choice 2
    db_convert_dir_name
    Choice 3
    db_convert_file_name
    Choice 4
    db_directory_convert
    Choice 5
    db_file_name_convert
    Tablespace APP_INDX is put in online backup mode when redo log 744 is current. When APP_INDX is taken out of online backup mode, redo log 757 is current.
    Referring to the scenario above, if the backup is restored, what are the start and end redo logs used, in order, to perform a successful point-in-time recovery of APP_INDX?
    Choice 1
    Start Redo Log 744, End Redo Log 757
    Choice 2
    Start Redo Log 743, End Redo Log 756
    Choice 3
    Start Redo Log 745, End Redo Log 756
    Choice 4
    Start Redo Log 744, End Redo Log 756
    Choice 5
    Start Redo Log 743, End Redo Log 757
    You want to make new data entered or changed in a table adhere to a given integrity constraint, but data exist in the table that violates the constraint.
    Referring to the scenario above, what do you do?
    Choice 1
    Use an enabled novalidate constraint.
    Choice 2
    Use an enabled validate constraint.
    Choice 3
    Use a deferred constraint.
    Choice 4
    Use a disabled constraint.
    Choice 5
    You cannot enforce this type of constraint
    In Oracle 9i, the connect internal command has been discontinued.
    Referring to the text above, how do you achieve a privileged connection in Oracle 9i?
    Choice 1
    CONNECT <username> AS SYSOPER where username has DBA privileges.
    Choice 2
    CONNECT <username> as SYSDBA.
    Choice 3
    Connect using Enterprise Manager.
    Choice 4
    CONNECT sys.
    Choice 5
    Use CONNECT <username> as normal but include the user in the external password file.
    How many partitions can a table have?
    Choice 1
    64
    Choice 2
    255
    Choice 3
    1,024
    Choice 4
    65,535
    Choice 5
    Unlimited
    In Cache Fusion, when does a request by one process for a resource owned by another process fail?
    Choice 1
    When a null mode resource request is made for a resource already owned in exclusive mode by another process
    Choice 2
    When a shared mode resource request is made for a resource already owned in shared mode by another process
    Choice 3
    When a shared mode resource request is made for a resource already owned in null mode by another process
    Choice 4
    When an exclusive mode resource request is made for a resource already owned in null mode by another process
    Choice 5
    When an exclusive mode resource request is made for a resource already owned in shared mode by another process
    The Oracle Internet Directory debug log needs to be changed to show information for the following events.
    Given the Debug Event Types and their numeric values:
    Starting and stopping of different threads. Process related. - 4
    Detail level. Shows the spawned commands and the command-line arguments passed - 32
    Operations being performed by configuration reader thread. Configuration refresh events. - 64
    Actual configuration reading operations - 128
    Operations being performed by scheduler thread in response to configuration refresh events, and so on - 256
    What statement turns debug on for all of the above event types?
    Choice 1
    oidctl server=odisrv flags="debug=4 debug=32 debug=64 debug=128 debug=256" start
    Choice 2
    oidctl server=odisrv debug="4,32,64,128,256" start
    Choice 3
    oidctl server=odisrv flags="debug=4,32,64,128,256" start
    Choice 4
    oidctl server=odisrv flags="debug=484" start
    Choice 5
    oidctl server=odisrv debug=4 debug=32 debug=64 debug=128 debug=256 start
    A new OFA-compliant database is being installed using the Oracle installer. The mount point being used is /u02.
    Referring to the scenario above, what is the default value for ORACLE_BASE?
    Choice 1
    /usr/app/oracle
    Choice 2
    /u02/oracle
    Choice 3
    /u02/app/oracle
    Choice 4
    /u01/app/oracle
    Choice 5
    /u02/oracle_base
    You need to start the Connection Manager Gateway and the Connections Admin processes.
    Referring to the scenario above, what command do you execute?
    Choice 1
    CMCTL START CM
    Choice 2
    CMCTL START CMADMIN
    Choice 3
    CMCTL START CMAN
    Choice 4
    CMCTL START CMGW
    Choice 5
    CMCTL START CMGW CMADM
    When performing full table scans, what happens to the blocks that are read into buffers?
    Choice 1
    They are read into the first free entry in the buffer list.
    Choice 2
    They are put on the MRU end of the buffer list if the NOCACHE clause was used while altering or creating the table.
    Choice 3
    They are put on the LRU end of the buffer list if the CACHE clause was used while altering or creating the table.
    Choice 4
    They are put on the LRU end of the buffer list by default.
    Choice 5
    They are put on the MRU end of the buffer list by default.
    You wish to take advantage of the Oracle datatypes, but you need to convert your existing LONG or LONG RAW columns to Character Large Object (CLOB) and Binary Large Object (BLOB) datatypes.
    Referring to the scenario above, what is the quickest method to use to perform this conversion?
    Choice 1
    Use the to_lob function when selecting data from the existing table into a new table.
    Choice 2
    Use the ALTER TABLE statement and MODIFY the column to the new LOB datatype.
    Choice 3
    You must export the existing data to external files and then re-import them as BFILE external LOBS.
    Choice 4
    Create a new table with the same columns but with the LONG or LONG RAW column changed to a CLOB or BLOB type. The next step is to INSERT INTO newtable select * from oldtable.
    Choice 5
    LONG and LONG RAW datatypes are not compatible with LOBS and cannot be converted within the Oracle database.
    You need to redefine the JOURNAL table in the stress test environment. You want to check first to see if it is possible to redefine this table online.
    Referring to the scenario above, what statement do you execute that checks whether or not the JOURNAL table can be redefined online if you are connected as the table owner?
    Choice 1
    Execute DBMS_REDEFINITION.CHECK_TABLE_REDEF(USER,'JOURNAL');
    Choice 2
    Execute DBMS_REDEFINITION.VERIFY_REDEF_TABLE(USER,'JOURNAL');
    Choice 3
    Execute DBMS_REDEFINITION.CAN_REDEF_TABLE(USER,'JOURNAL');
    Choice 4
    Execute DBMS_REDEFINITION.START_REDEF_TABLE(USER,'JOURNAL');
    Choice 5
    Execute DBMS_REDEFINITION.SYNC_INTERIM_TABLE(USER,'JOURNAL');
    Which one of the following procedures is used for the extraction of the LogMiner dictionary?
    Choice 1
    DBMS_LOGMNR_D.EXTRACT
    Choice 2
    DBMS_LOGMNR.BUILD
    Choice 3
    DBMS_LOGMINER_D.BUILD
    Choice 4
    DBMS_LOGMNR_D.BUILD_DICT
    Choice 5
    DBMS_LOGMNR_D.BUILD
    set pause on;
    column sql_text format a35;
    select sid, osuser, username, sql_text
    from v$session a, v$sqlarea b
    where a.sql_address=b.address
    and a.sql_hash_value=b.hash_value;
    Why is the SQL*Plus sample code segment above used?
    Choice 1
    To view full text search queries by issuing user
    Choice 2
    To list all operating system users connected to the database
    Choice 3
    To view SQL statements issued by connected users
    Choice 4
    To detect deadlocks
    Choice 5
    To view paused database sessions
    When dealing with very large tables in which the size greatly exceeds the size of the System Global Area (SGA) data block buffer cache, which one of the following operations must be avoided?
    Choice 1
    Group operations
    Choice 2
    Aggregates
    Choice 3
    Index range scans
    Choice 4
    Multi-table joins
    Choice 5
    Full table scans
    You are reading the explain plan of a problem query and notice that full table scans are used with a HASH join.
    Referring to the scenario above, in what instance is a HASH join beneficial?
    Choice 1
    Only when using the rule-based optimizer
    Choice 2
    When joining two small tables--neither having any primary keys or unique indexes
    Choice 3
    When no indexes are present
    Choice 4
    When joining two tables where one table may be significantly larger than the other
    Choice 5
    When using the parallel query option
    Performance is poor during peak transaction periods on a database you administer. You would like to view some statistics on areas such as LGWR (log writer) waits.
    Referring to the scenario above, what performance view do you query to access these statistics?
    Choice 1
    V$SQLAREA
    Choice 2
    V$SYSSTAT
    Choice 3
    V$SESS_IO
    Choice 4
    V$PQ_SYSSTAT
    Choice 5
    DBA_CATALOG
    What security feature allows the database administrator to monitor successful and unsuccessful attempts to access data?
    Choice 1
    Autotrace
    Choice 2
    Fine-Grained Auditing
    Choice 3
    Password auditing
    Choice 4
    sql_trace
    Choice 5
    tkprof
    You need to configure a default domain that is automatically appended to any unqualified net service name.
    What Oracle-provided network configuration tool do you use to accomplish the above task?
    Choice 1
    Oracle Names Control Utility
    Choice 2
    Configuration File Utility
    Choice 3
    Oracle Network Configuration Assistant
    Choice 4
    Listener Control Utility
    Choice 5
    Oracle Net Manager
    You are experiencing performance problems due to network traffic. One way to tune this is by setting the SDU size.
    Referring to the scenario above, why do you change the SDU size?
    Choice 1
    The requests to the database return small amounts of data as in an OLTP system.
    Choice 2
    The application can be tuned to account for the delays.
    Choice 3
    The data coming back from the server are fragmented into several packets.
    Choice 4
    A large number of users are logged on concurrently to the system.
    Choice 5
    A high-speed network is available where the data transmission effect is negligible.
    You have partitioned the table ORDER on the ORDERID column using range partitioning. You want to create a locally partitioned index on this table. You also want this index to be unique.
    Referring to the scenario above, what is required for the creation of this unique locally partitioned index?
    Choice 1
    A unique partitioned index on a table cannot be local.
    Choice 2
    There can be only one unique locally partitioned index on the table.
    Choice 3
    The index has to be equipartitioned.
    Choice 4
    The table's primary key columns should be included in the index key.
    Choice 5
    The ORDERID column has to be part of the index's key.
    You have a large On-Line Transaction Processing (OLTP) database running in archive log mode with two redo log groups that have two members each.
    Referring to the above scenario, to avoid stalling during peak activity periods, which one of the following actions do you take?
    Choice 1
    Turn off redo log multiplexing.
    Choice 2
    Increase your LOG_CHECKPOINT_INTERVAL setting.
    Choice 3
    Add a third member to each of the groups.
    Choice 4
    Add a third redo log group.
    Choice 5
    Turn off archive logging.
    When transporting a tablespace, the tablespace needs to be self-contained.
    Referring to the scenario above, in which one of the following is the tablespace self-contained?
    Choice 1 A referential integrity constraint points to a table across a set boundary.
    Choice 2 A partitioned table is partially contained in the tablespace.
    Choice 3 An index inside the tablespace is for a table outside of the tablespace.
    Choice 4 A corresponding index for a table is outside of the tablespace.
    Choice 5 A table inside the tablespace contains a LOB column that points to LOBs outside the tablespace.
    You have experienced a database failure requiring a full database restore. Downtime is extremely costly, as is any form of data loss. You run the database in archive log mode and have a full database backup from three days ago. You have a database export from last night. You are not running Oracle Parallel Server (OPS).
    Referring to the above scenario, how do you minimize downtime and data loss?
    Choice 1 Import the data from the export using direct-path loading.
    Choice 2 Create a standby database and activate it.
    Choice 3 Perform a restore of necessary files and use parallel recovery operations to speed the application of redo entries.
    Choice 4 Conduct a full database restore and bring the database back online immediately. Apply redo logs during a future maintenance window.
    Choice 5 Perform a restore and issue a recover database command
    You have two large tables with thousands of rows. To select the rows from table_1 that are not referenced by an indexed common column (e.g., col_1) in table_2, you issue the following statement:
    select * from table_1
    where col_1 NOT in (select col_1 from table_2);
    This statement is taking a very long time to return its result set.
    Referring to the scenario above, which equivalent statement returns much faster?
    Choice 1 select * from table_1
    where col_1 in (select col_1 from table_2 where col_1 = table_1.col_1)
    Choice 2 select * from table_2
    where col_1 not in (select col_1 from table_1)
    Choice 3 select * from table_1
    where not exists (select 'x' from table_2 where col_1 = table_1.col_1)
    Choice 4 select table_1.* from table_1, table_2
    where table_1.col_1 = table_2.col_1 (+)
    Choice 5 select * from table_1
    Which one of the following initialization parameters is obsolete in Oracle 9i?
    Choice 1 LOG_ARCHIVE_DEST
    Choice 2 GC_FILES_TO_LOCKS
    Choice 3 FAST_START_MTTR_TARGET
    Choice 4 DB_BLOCK_BUFFERS
    Choice 5 DB_BLOCK_LRU_LATCHES
    You find that one of your tablespaces is running out of disk space.
    Referring to the scenario above, which one of the following is NOT a valid option to increase the space available to the tablespace?
    Choice 1 Move some segments to other tablespaces.
    Choice 2 Resize an existing datafile in the tablespace.
    Choice 3 Add another datafile to the tablespace.
    Choice 4 Increase the MAX_EXTENTS for the tablespace.
    Choice 5 Turn AUTOEXTEND on for one or more datafiles in the tablespace.
    What tools or utilities do you use to transfer the data dictionary's structural information of transportable tablespaces?
    Choice 1 DBMS_TTS
    Choice 2 SQL*Loader
    Choice 3 Operating System copy commands
    Choice 4 DBMS_STATS
    Choice 5 EXP and IMP
    Which one of the following, if backed up, is potentially problematic to a complete recovery?
    Choice 1
    Control file
    Choice 2
    System Tablespace
    Choice 3
    Data tablespaces
    Choice 4
    Online Redo logs
    Choice 5
    All archived redologs after the last backup
    Your data warehouse performs frequent full table scans. Your DB_BLOCK_SIZE is 16,384.
    Referring to the scenario above, what parameter do you use to reduce disk I/O?
    Choice 1 LOG_CHECKPOINT_TIMEOUT
    Choice 2 DBWR_IO_SLAVES
    Choice 3 DB_FILE_MULTIBLOCK_READ_COUNT
    Choice 4 DB_WRITER_PROCESSES
    Choice 5 DB_BLOCK_BUFFERS
    Which one of the following describes the "Reset database to incarnation" command used by Recovery Manager?
    Choice 1 It performs a resynchronization of online redo logs to a given archive log system change number (SCN).
    Choice 2 It performs point-in-time recovery when using Recovery Manager.
    Choice 3 It restores the database to the initial state in which it was found when first backing it up via Recovery Manager.
    Choice 4 It restores the database to a save point as defined by the version control number or incarnation number of the database.
    Choice 5 It is used to undo the effect of a resetlogs operation by restoring backups of a prior incarnation of the database.
    You are using the CREATE TABLE statement to populate the data dictionary with metadata to allow access to external data, where /data is a UNIX writable directory and filename.dbf is an arbitrary name.
    Referring to the scenario above, which clause must you add to your CREATE TABLE statement?
    Choice 1
    organization external
    Choice 2 external file /data/filename.dbf
    Choice 3 ON /data/filename.dbf
    Choice 4 organization file
    Choice 5 file /data/filename.dbf
    Your business user has expressed a need to be able to revert back to data that are at most eight hours old. You decide to use Oracle 9i's FlashBack feature for this purpose.
    Referring to the scenario above, what is the value of UNDO_RETENTION that supports this requirement?
    Choice 1 480
    Choice 2 8192
    Choice 3 28800
    Choice 4 43200
    Choice 5 28800000
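    For reference, eight hours is 8 * 3,600 = 28,800 seconds; as an init.ora sketch:
    UNDO_RETENTION = 28800    # 8 hours expressed in seconds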
    Materialized Views constitute which data warehousing feature offered by Oracle?
    Choice 1 FlashBack Query
    Choice 2 Summary Management
    Choice 3 Dimension tables
    Choice 4 ETL Enhancements
    Choice 5 Updateable Multi-table Views
    You need to send listener log information to the Oracle Support Services. The listener name is LSNRORA1.
    Referring to the scenario above, which one of the following statements do you use in the listener.ora file to generate this log information?
    Choice 1 TRACE_LEVEL_LSNRORA1=debug
    Choice 2 TRACE_LEVEL_LSNRORA1=admin
    Choice 3 TRACE_LEVEL_LSNRORA1=5
    Choice 4 TRACE_LEVEL_LSNRORA1=support
    Choice 5 TRACE_LEVEL_LSNRORA1=on
    Which one of the following statements causes you to choose the NOARCHIVELOG mode for an Oracle database?
    Choice 1
    The database does not need to be available at all times.
    Choice 2
    The database is used for a DSS application, and updates are applied to it once in 48 hours.
    Choice 3
    The database needs to be available at all times.
    Choice 4
    It is unacceptable to lose any data if a disk failure damages some of the files that constitute the database.
    Choice 5
    There will be times when you will need to recover to a point-in-time that is not current.

    Post a few if you need answers to a few.
    Anyway, my best shot:-
    Q. Directories are different
    A. Use db_file_name_convert; as for why, see the Data Guard documentation on file name conversion for standby databases.
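    A minimal init.ora sketch for the standby, assuming (purely for illustration) the primary keeps its datafiles under /u01/oradata/prod and the standby uses /u02/oradata/stby; log_file_name_convert is normally set alongside it for the redo logs:
    db_file_name_convert = ('/u01/oradata/prod', '/u02/oradata/stby')
    log_file_name_convert = ('/u01/oradata/prod', '/u02/oradata/stby')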
    Q What facility does Oracle provide to detect chained and migrated rows after the proper tables have been created?
    A. The ANALYZE command with the LIST CHAINED ROWS option
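    For example, assuming a table called EMP and the CHAINED_ROWS table created by the utlchain.sql script shipped under $ORACLE_HOME/rdbms/admin:
    @?/rdbms/admin/utlchain.sql
    ANALYZE TABLE emp LIST CHAINED ROWS INTO chained_rows;
    SELECT head_rowid FROM chained_rows WHERE table_name = 'EMP';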
    Q While doing an export, the following is encountered:
    A. My best guess: use the RESUMABLE=Y option for the export.
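    If that guess is right, the export command line might look something like this (connect string, file name, and timeout value are made up):
    exp system/manager@orcl file=full.dmp full=y resumable=y resumable_timeout=7200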
    Q. The DBCA (Database Configuration Assistant) prompts the installer to enter the password for which default users?
    A. SYS and SYSTEM
    Q You are designing the physical database for an application that stores dates and times. This will be accessed by users from all over the world in different time zones. Each user needs to see the time in his or her time zone.
    A. TIMESTAMP WITH LOCAL TIME ZONE
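    A sketch of such a column (table and column names are made up); values are normalized to the database time zone on insert and displayed in each session's time zone on query:
    CREATE TABLE audit_log (
      id         NUMBER,
      logged_at  TIMESTAMP(6) WITH LOCAL TIME ZONE
    );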
    Q What command do you use to drop a temporary tablespace and the associated OS files?
    A. ALTER DATABASE TEMPFILE '/data/oracle/temp01.dbf' DROP INCLUDING DATAFILES;
    Q You wish to use a graphical interface to manage database locks and to identify blocking locks.
    A. Lock Manager, a tool in the base Oracle Enterprise Manager (OEM) product, as well as the console
    Q CREATE DATABASE abc
    A. They cannot be changed unless you re-create your control file
    Q You need to change the archivelog mode of an Oracle database.
    A. Execute the archive log list command
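    archive log list shows the current mode first; the actual switch, sketched here for a change from NOARCHIVELOG to ARCHIVELOG, would then be:
    ARCHIVE LOG LIST
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;
    -- on 9i also enable automatic archiving, e.g. LOG_ARCHIVE_START=TRUE in init.ora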
    Q When interpreting statistics from the v$sysstat, what factor do you need to keep in mind that can skew your statistics?
    A. Choice 3: The statistics gathered by v$sysstat include database startup activities and database activity that initially populates the database buffer cache and shared pool.
    Q You want to shut down the database, but you do not want client connections to lose any non-committed work. You also do not want to wait for every open session to disconnect.
    A. Choice 3: Shutdown transactional
    Q What step or steps do you take to enable Automatic Undo Management (AUM)?
    A. Choice 5: Add the UNDO_MANAGEMENT=AUTO parameter to init.ora, create the UNDO tablespace, and stop/start the database.
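    A minimal sketch of those steps (tablespace name, file path, and size are illustrative):
    -- 1) in init.ora:
    UNDO_MANAGEMENT = AUTO
    UNDO_TABLESPACE = undotbs1
    -- 2) create the undo tablespace:
    CREATE UNDO TABLESPACE undotbs1
      DATAFILE '/u01/oradata/db/undotbs01.dbf' SIZE 500M;
    -- 3) restart the instance (SHUTDOWN IMMEDIATE, then STARTUP)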
    Q What Oracle 9i feature allows the database administrator to create tablespaces, datafiles, and log groups WITHOUT specifying physical filenames?
    A. Choice 4 Oracle Managed Files
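    A sketch of how Oracle Managed Files looks in practice (tablespace name and directory path are illustrative):
    -- init.ora
    DB_CREATE_FILE_DEST = '/u02/oradata/app01'
    -- SQL: no DATAFILE clause needed; Oracle generates the file name and location
    CREATE TABLESPACE app_data;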

  • Questions on: 1) Last Update; 2) Max String Length; 3) dynamic table keys

    Hi:
    - I am currently prototyping a plug-in but I have some questions:
    - 1) At the 'All Metrics' table, what is the intent of the 'Last Upload' field,
    and how is this field updated?
    - I have created some metrics; for some of them the 'Last Upload' field
    has a timestamp, but for others there is no data.
    As far as I can tell the metrics are defined similarly, so I do not know why the behaviour is different.
    - 2) Is there a maximum string length that the Oracle agent can accept?
    - I have a script which just returns all the environment variables into one cell.
    In emagent.trc, the Oracle agent issues the following warning:
    2009-08-27 12:38:47 Thread-76336 WARN upload: Truncating value of "STRING_VALUE" from "AGENT_HOME=
    - Is the truncation an issue?
    - 3) I have created some dynamic tables by performing an SNMP walk, but the key that I use
    is the index of the SNMP table. The data is collected correctly, but I am not sure that using
    the table index as the key for OEM is a good idea, because the metrics are stored according to the SNMP table index.
    Should the OEM key be something like the object name instead of the index number from the SNMP walk?
    For example, when I click a metric, the metric name is: Sensor Index 1.
    But should it not be something like 'Fan #1' ?
    Thanks John

    Metrics are collected at certain intervals; the Last Upload field indicates the date and time of the last metric collection that was uploaded. The shorter the interval, the more recent that date should be. If you don't specify a CollectionItem for your metrics in the default collection file, they won't be collected or uploaded.
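    For example, a collection entry in the plug-in's default collection file typically looks roughly like this (metric name and interval are made up, and the surrounding wrapper elements depend on your target metadata):
    <CollectionItem NAME="MySensorMetric">
      <Schedule>
        <IntervalSchedule INTERVAL="15" TIME_UNIT="Min"/>
      </Schedule>
    </CollectionItem>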
    If your metric column is a STRING type, it is stored in a VARCHAR2(4000), so longer values are truncated, as the warning shows.
    I'm not sure I understand your last problem... You have a table metric with a set of columns. The column you use as the key is just some tracking index which doesn't really mean anything. As long as your key column(s) make the row unique, the agent will be satisfied. If you want something more meaningful as your key, then it's something you will have to inject into your dataset if it isn't already there.
