Oracle 11g RAC with partitioning and cross-instance parallel query problem

I have set up a 300 GB TPC-H database in a 4-node RAC environment (8 CPUs per node, 16 GB of memory, 2 GHz processors). The system is served by 2.5 TB of SSD for its I/O subsystem, managed by a combination of ASM and OCFS2.
When I run a large parallel query (number 9 in the TPC-H query set) I get:
ORA-00600: internal error code, arguments: [kxfrGraDistNum3], [65535], [4]
with all the other arguments blank. There were some reports of this on version 9, but it was supposedly fixed. Has anyone seen this behavior, or does anyone have a workaround?
Mike

Good idea! Why didn't I think of that? Oh yeah... I did. Unfortunately, TMS CSI is an update-only partner-type CSI, so I cannot submit an SR. The ORA-600 lookup was how I found the old reports, but it didn't have any 11g references. I hoped maybe someone in the community had encountered this and had a workaround. By the way, the query looks like this:
select
    nation,
    o_year,
    sum(amount) as sum_profit
from
    (
        select
            n_name as nation,
            extract(year from o_orderdate) as o_year,
            l_extendedprice * (1 - l_discount) - ps_supplycost * l_quantity as amount
        from
            h_part,
            h_supplier,
            h_lineitem,
            h_partsupp,
            h_order,
            h_nation
        where
            s_suppkey = l_suppkey
            and ps_suppkey = l_suppkey
            and ps_partkey = l_partkey
            and p_partkey = l_partkey
            and o_orderkey = l_orderkey
            and s_nationkey = n_nationkey
            and p_name like '%spring%'
    ) profit
group by
    nation,
    o_year
order by
    nation,
    o_year desc;
The other 21 queries, all using the same tables, degrees of parallelism, and cross-instance settings, executed OK.
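If anyone wants to experiment, one workaround worth trying (untested against this particular ORA-600) is to keep all the PX slaves on a single instance, so the cross-instance distribution code implicated in the error is bypassed:
-- Untested workaround sketch: confine the PX slaves to one instance.
-- PARALLEL_FORCE_LOCAL exists from 11.2 onward:
alter session set parallel_force_local = true;
-- On 11.1, the older instance-group mechanism does much the same thing
-- ('pqgrp' is an illustrative group name that must also appear in the
-- local instance's INSTANCE_GROUPS parameter):
alter session set parallel_instance_group = 'pqgrp';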
Mike

Similar Messages

  • RAC with Dbnode1 and Dbnode2

I have RAC with Dbnode1 and Dbnode2, and my application submits a job on Dbnode1. While the job is running, Dbnode1 goes down.
Is it possible for the running job to automatically move to Dbnode2?
1. The App1 and App2 nodes and the DBNode1 and DBNode2 nodes are running.
2. An application batch is submitted successfully from AppNode1/DBNode1, and DBNode1 goes down in the middle of the batch.
3. The pending job does not switch automatically to DBNode2.

995587 wrote: Is it possible for the running job to automatically move to Dbnode2?
Yes, it is completely possible, but your application needs to support Oracle RAC.
    Client Failover Best Practices for Highly Available Oracle Databases: Oracle Database 11g Release 2
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-11gr2-client-failover-173305.pdf
    Application Failover with Oracle Database 11g
    http://www.oracle.com/technetwork/database/app-failover-oracle-database-11g-173323.pdf
    How to develop it:
    Transparent Application Failover in OCI
    http://docs.oracle.com/cd/E14072_01/appdev.112/e10646.pdf
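If the batch is a DBMS_SCHEDULER job, one concrete way to get this behavior is to bind the job to a job class whose service relocates to a surviving node; a sketch, where the service name and procedure are illustrative and the service itself must be configured for failover as described in the papers above:
-- Sketch: tie a scheduler job to a RAC service so it can restart on a
-- surviving instance. 'batch_svc' and app_owner.run_batch are illustrative.
begin
  dbms_scheduler.create_job_class(
    job_class_name => 'BATCH_CLASS',
    service        => 'batch_svc');
  dbms_scheduler.create_job(
    job_name   => 'NIGHTLY_BATCH',
    job_type   => 'STORED_PROCEDURE',
    job_action => 'app_owner.run_batch',
    job_class  => 'BATCH_CLASS',
    enabled    => true);
end;
/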

  • RAC with ASM and without ASM

Hi all,
We are planning to install 11g RAC, active/active, and we are using SAN storage with RAID 10.
I know ASM is a nice feature, but from what I have seen in the manuals and training, it will need more maintenance in the future, for patching and so on, because it is maintained as an instance.
Why do I need ASM, since I have a SAN and I can control mirroring, etc.?
I need a solid answer here: why do I need this feature when it can already be covered by another facility like the SAN?
Best Regards,

What I have found in the RAC world is that there is maintenance no matter which way you go. A cluster file system will require upgrades, patches, etc. Raw volumes will require extra effort in allocation and will increase the number of files in the database. ASM requires an additional instance on each node, which is quite simple to maintain, and rolling patches for ASM are slowly becoming a reality. I have found that managing raw volumes is more trouble than maintaining the ASM instances, and the added benefits of ASM definitely outweigh the maintenance. Cluster file system maintenance is pretty much a wash.
As for ASM being widely used, the most recent RAC clusters I have built (the last three) have all been on ASM: one on HP-UX and two on Linux (Red Hat and Oracle Enterprise Linux), and all the future clusters I know of are going to be ASM as well. While it may be true that many existing RAC environments have not yet moved to ASM, almost all new RAC environments are using it; it is certainly taking hold. Moving a large database from raw volumes or a cluster file system to ASM can look like, and indeed is, a lot of work, but in the long run my experience with ASM has been positive, so I would not hesitate to recommend that new RAC clusters be built with ASM and that existing clusters have a migration plan in place. Some cluster file systems (Veritas, GPFS, etc.) carry additional licensing cost that ASM does not, so moving existing clusters can save money. Raw volume management may not fall on the DBA, but someone has to manage all those volumes at the SAN level, and that is additional management effort, just not necessarily the DBA's.
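As an aside, since the SAN is already doing RAID-10 mirroring, ASM in that setup is typically created with external redundancy so it only handles striping and rebalancing; roughly (device paths are illustrative):
-- Sketch: EXTERNAL redundancy defers mirroring to the SAN; ASM only stripes.
create diskgroup data external redundancy
  disk '/dev/rdsk/lun01',
       '/dev/rdsk/lun02';
-- Adding a disk later triggers an automatic online rebalance:
alter diskgroup data add disk '/dev/rdsk/lun03';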
    Just my additional 2 cents worth.
    Hope this helps.

  • Problems with partitioning and install Grub. Fresh install

    All,
    First post here. I appreciate any help you can offer.
    I am having some problems when installing Arch Linux.
    I am installing Arch on a brand new (3 days old) Toshiba SatelliteC655D-S5300 Laptop.
    Hot sheet can be found at http://cdgenp01.csd.toshiba.com/content … -S5300.pdf.
    I was initially installing from 2011.08.19 x86_64 Core CD but someone suggested using the latest version.
    Now I am installing from 2011.11.13 x86_64 CD burned at 4x (the slowest my burner can go).
    I am able to complete all steps up to installing GRUB, but it fails to install.
During partitioning I receive a few errors, and I believe this is contributing to the issue.
At first I tried automatic partitioning with a 100 MB /boot, 1024 MB swap, 10,000 MB /, and the rest of the 320 GB disk for /home. Each partition is ext3 except /boot, which is ext2.
During the automatic partitioning an error briefly occurred: /usr/lib/aif/core/libs/lib-blockdevices-filesystems.sh: line 355: !((partition_flag)): command not found.
After speaking with a friend, they suggested partitioning manually and using UUIDs instead.
1) So far I have removed all partitions and rebooted.
2) Partitioned using cfdisk: a bootable 100 MB partition, 1024 MB swap, a 15,000 MB primary (/), a 3000 MB logical (/var), and the rest, 300,949 MB, logical (/home).
    3) Once I write the changes and quit I reboot.
4) I go back into the installer and complete steps 1-3.
5) Go to step 4 and then manually configure block devices, filesystems, and mount points.
    6) I choose the option for uuid and hit ok.
    At this point 3 error messages appear at the bottom:
    /usr/lib/aif/core/libs/lib-ui-interactive.sh: line 602: local: 'part,' : not a valid identifier
    /usr/lib/aif/core/libs/lib-ui-interactive.sh: line 602: local: 'type,' : not a valid identifier
    /usr/lib/aif/core/libs/lib-ui-interactive.sh: line 602: local: 'label,' : not a valid identifier
    (Screenshot: http://i.imgur.com/OHRKo.jpg)
    7) Next it prompts me to add the mount points for each partition set.
8) I select the partition and the mount point; it asks me for a label and any additional opts for mkfs.ext3.
9) I leave the label and opts fields blank. After selecting OK on the opts field I get the same 3 errors as above:
    /usr/lib/aif/core/libs/lib-ui-interactive.sh: line 602: local: 'part,' : not a valid identifier
    /usr/lib/aif/core/libs/lib-ui-interactive.sh: line 602: local: 'type,' : not a valid identifier
    /usr/lib/aif/core/libs/lib-ui-interactive.sh: line 602: local: 'label,' : not a valid identifier
    (Screenshot: http://i.imgur.com/QqkSP.jpg)
I am able to successfully set a mount point and format each partition, but I receive the same set of 3 errors for each partition.
    10) Once I complete the formatting I proceed to step 8, install bootloader.
It says "Generating GRUB device map.. This could take a while. Please be patient."
I receive the following error on this screen: /usr/lib/aif/core/libs/lib-blockdevices-filesystems.sh: line 355: !((partition_flag)): command not found.
    (Screenshot: http://i.imgur.com/B5j4K.jpg)
11) After the error displays, it goes to the next screen: "before installing grub you must review the config file", etc.
12) I hit OK and then :q the config file. Is there a critical change in the config file that I'm missing?
13) After closing the file I select the boot device where the GRUB bootloader will be installed. My only option is /dev/sda. I hit OK.
    Then I get the following 2 errors:
    /usr/lib/aif/core/libs/lib-blockdevices-filesystems.sh: line 355: !((partition_flag)): command not found
    /usr/lib/aif/core/libs/lib-blockdevices-filesystems.sh: line 355: !((partition_flag)): command not found
    (Screenshot: http://i.imgur.com/ol840.jpg)
14) Error installing GRUB. See /dev/tty7 for output. OK.
15) GRUB was NOT successfully installed. OK.
    I checked out TTY7.
    It shows the installer issuing the following commands in GRUB.
    1) device (hd0,) /dev/sda
         Error 12: Invalid device requested
    2) root (hd0,0)
Filesystem type is ext2fs, partition type 0x83
    3) setup (hd0,)
    Checking if "/boot/grub/stage1" exists... no
    Checking if "/grub/stage1" exists... yes
    Checking if "/grub/stage2" exists... yes
    Checking if "/grub/e2fs_stage1_5" exists... yes
    Running "embed /grub/e2fs_stage1_5 (hd0,0)"... failed (this is not fatal)
    Running "embed /grub/e2fs_stage1_5 (hd0,0)"... failed (this is not fatal)
    Running "install /grub/stage1 (hd0,0) /grub/stage2 p /grub/menu.lst "... succeeded
    Done.
    4) quit
    I have tried rebooting from here and using the Arch CD to boot into the existing OS but it does not work.
    I tried grub-install /dev/sda
    I get Probing devices to check BIOS drives. This may take a long time.
    /dev/mapper../dm-0 does not have any corresponding BIOS drive.
    I have tried going into grub and issuing the same commands the install script did.
    Same errors.
I'm afraid I don't have network access at the moment, so I can't successfully run /arc/report-issues.
    I hope I've included enough information to start the troubleshooting.
    Let me know if I've missed anything!
    Thanks in advance,
    -Jason
    Last edited by username17 (2011-11-17 22:37:56)

    username17 wrote:I get Probing devices to check BIOS drives. This may take a long time.
    /dev/mapper../dm-0 does not have any corresponding BIOS drive.
Your drive does not have an MBR to install GRUB to, as it is a GPT disk, which is also not supported by the old GRUB.
You need to create a small partition at the very beginning of the drive (8 MB is plenty) and set the "bios_grub" flag on it; that is the "BIOS drive" your error refers to.
    You will then need to install the grub2-bios package following the chroot instructions on the grub2 wiki page here: https://wiki.archlinux.org/index.php/GRUB2#Installation
    ** Please note that I found the chroot mounts to be outdated - replace "/tmp/install" with "/mnt" **
Your alternative solution is to boot a GParted live CD and prepare your disk as MBR; this will (most likely) destroy all existing data on the disk.

  • ORA-1555 with Parallel query problem

    Hi
We are getting many ORA-1555 errors from parallel query operations. A few queries are failing with hints like /*+ Q98989129 NO_EXPAND INDEX */ or /*+ Q908231094 INDEX_RRS */. Our application is DSS-like, with bulk loading and big reports. I have doubled the undo retention and increased the undo size to 200 GB, and set the trace event. Most of the failing queries are parallel queries. The query below is not used by our application; it seems to be a parallel sub-query. I need to understand the exact mechanism by which parallel queries are divided according to their degree of parallelism.
SELECT /*+ Q277874009 INDEX_RRS(A1 "PK_TF_UTRAN_UCELL10_TAB") */
A1."TSTAMP" C0, A1."INSTANCE_ID" C1, A1."PMNOFOIRATHOMULTIGSMFAILURE" C2,
A1."PMNOFOIRATHOCS57GSMFAIL" C3, A1."PMNOFOSPEECHGSMFAILURE" C4,
A1."PMNOFOHOSTANDGSMFAILURE" C5
FROM "FLEXPM"."ERC_TF_UTRAN_UCELL10_TAB" PX_GRANULE(0, BLOCK_RANGE, DYNAMIC) A1
WHERE A1."TSTAMP"(+) <= TO_DATE('2006-09-25 23:45:00', 'yyyy-mm-dd hh24:mi:ss')
    Thanks

Hi,
probably the error is due to wrong execution plans chosen by the optimizer.
Are the table statistics up to date? Did you define parallelism without a degree (e.g., using the DEFAULT clause on the object instead of an explicit DEGREE)?
Normally, parallel execution can be done in two ways, depending on the driving object:
1. As in previous versions (e.g., 7.3), without partitioning: the object is split by rowid directly from the parallel coordinator to the query slaves (normally a number equal to the degree of parallelism, or double that in the case of sorts and/or joins).
2. With a partitioned object: a query slave is put in charge of each partition/subpartition (physical segment).
In both cases, the original session (SID) is the parallel coordinator, which coordinates the slave executions.
The hints and statements that you've reported are typical of queries issued by slave processes.
In my experience, setting the PARALLEL degree to DEFAULT (no degree after the PARALLEL clause during CREATE or ALTER of the object) can cause an "explosion" of slave startups that can lead to your errors.
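For example, replacing DEFAULT with an explicit, bounded degree keeps the slave count predictable; a sketch using the table from your trace (pick a degree that fits your CPU count):
-- Check how parallelism is currently declared (DEFAULT shows in DEGREE):
select table_name, degree
from   dba_tables
where  owner = 'FLEXPM'
and    table_name = 'ERC_TF_UTRAN_UCELL10_TAB';
-- Replace DEFAULT with an explicit degree so slave startup stays bounded:
alter table flexpm.erc_tf_utran_ucell10_tab parallel (degree 4);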
    Hope this helps
    Max

  • Problem with temp space allocation in parallel query

    Hello
I've got a query which matches two large result sets (25m+ rows) against each other and does some basic filtering and aggregation. When I run this query in serial, it takes about 30 minutes and completes successfully. When I specify a parallel degree of 4 for each result set, it also completes successfully, in about 20 minutes. However, when I specify that it should be run in parallel but don't specify a degree for each result set, it spawns 10 parallel servers and, after a couple of minutes, bombs out from one of the parallel servers with:
    ORA-12801: error signaled in parallel query server P000
ORA-01652: unable to extend temp segment by 64 in tablespace TEMP
This appears to happen when it is about to perform a large hash join. The execution plan does not change whether the parallel degree is specified or not, and there are several GB of temp space available.
I'm at a bit of a loss as to how to track down specifically what is causing this problem. I've looked at v$sesstat for all of the sessions involved and it hasn't really turned anything up. I've tried tracing the main session and that hasn't turned up much either. From what I can tell, one of the sessions seems to try to allocate massive amounts of temp space that it just does not need, but I can't figure out why.
    Any ideas of how to approach finding the cause of the problem?
    David

    Hello
I've finally resolved this, and the resolution was relatively simple; it was also the main thing that Mark Rittman said he did in his article: reduce the size of the hash join.
After querying v$sql_workarea_active I could see what was happening: the sum of the temp space for all of the parallel slaves was exceeding the total amount of temp space available on the system. When run in serial, it was virtually at the limit. I guess the extra was just the overhead of each slave maintaining its own hash table.
I also made the mistake of misreading the execution plan, assuming that the data being pushed to the hash join was filtered to eliminate the data that was not of interest. Upon reflection, this was a rather stupid assumption on my part. Anyway, I used subquery factoring with the materialize hint to ensure that the hash join was only working on the data it should have been. This significantly reduced the size of the hash table and therefore the amount of temp space required.
I did speak to Oracle support and they suggested using pga_aggregate_target rather than the separate *_area_size parameters. I found that this had very little impact, as the problem was related to the volume of data rather than whether it was being processed in memory or not. That said, I did try upping the hash_area_size for the session with some initial success, but ultimately it didn't prove to be scalable. We are, however, now using pga_aggregate_target in prod.
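For anyone hitting the same thing, the shape of the fix was roughly this (all table and column names here are invented for illustration):
-- Filter early inside a materialized WITH block so the hash join only
-- builds on the rows of interest.
with filtered as (
  select /*+ materialize */ join_key, amount
  from   big_source_a
  where  status = 'ACTIVE'   -- the filter that was being applied too late
)
select /*+ parallel(f, 4) parallel(b, 4) */
       f.join_key, sum(f.amount)
from   filtered f, big_source_b b
where  b.join_key = f.join_key
group  by f.join_key;
-- And the view that showed where the temp space was going:
select sid, operation_type, work_area_size, tempseg_size
from   v$sql_workarea_active;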
    So that's that. Problem sorted. And as the title of Mark Rittman's article suggests, I was trying to be too clever! :-)
    HTH
    David

  • IOS7 problems with WhatsApp and email linkage to contacts problems

I am having problems in WhatsApp where, in my chats, the name of the person is no longer identified, only the phone number. Thus, it is very difficult to know whose message it is. Also, in email, the name being pulled as primary is the nickname field rather than the actual name. Is there a way to fix this?

Check with WhatsApp support. If you have downloaded a new version and it is still acting up, there could be something wrong with it, and their support should be able to answer your question. That is not an Apple application.

  • Parallel query problem

Problem:
In this explain plan, 3 indexes are used, and I altered those indexes to NOPARALLEL / degree 1. After that, the parallel query is still being used.
How do I disable parallel query for the plan below?
Please help me out.
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
    | 0 | SELECT STATEMENT | | 1 | 54 | 3 | | | | | |
    | 1 | PX COORDINATOR | | | | | | | | | |
    | 2 | PX SEND QC (RANDOM) | :TQ10001 | 1 | 54 | 3 | | | Q1,01 | P->S | QC (RAND) |
    | 3 | NESTED LOOPS | | 1 | 54 | 3 | | | Q1,01 | PCWP | |
    | 4 | NESTED LOOPS ANTI | | 1 | 42 | 3 | | | Q1,01 | PCWP | |
    | 5 | NESTED LOOPS | | 1 | 28 | 3 | | | Q1,01 | PCWP | |
    | 6 | BUFFER SORT | | | | | | | Q1,01 | PCWC | |
    | 7 | PX RECEIVE | | | | | | | Q1,01 | PCWP | |
    | 8 | PX SEND BROADCAST | :TQ10000 | | | | | | | S->P | BROADCAST |
    | 9 | TABLE ACCESS BY GLOBAL INDEX ROWID| LEA_AGREEMENT_DTL | 1 | 10 | 1 | 1 | 1 | | | |
    | 10 | INDEX UNIQUE SCAN | LEA_AGREEMENT_DTL_UQ | 1 | | 1 | | | | | |
    | 11 | PX BLOCK ITERATOR | | 1 | 18 | 2 | | | Q1,01 | PCWC | |
    | 12 | TABLE ACCESS FULL | PDC_DISBURSAL_TXN_D | 1 | 18 | 2 | | | Q1,01 | PCWP | |
    | 13 | INDEX RANGE SCAN | PDC_MULTIPLE_LOAN_TXN_DE | 1 | 14 | 1 | | | Q1,01 | PCWP | |
    | 14 | TABLE ACCESS BY INDEX ROWID | PDC_DISBURSAL_TXN_H | 1 | 12 | 1 | | | Q1,01 | PCWP | |
    | 15 | INDEX UNIQUE SCAN | PDC_DISBURSAL_TXN_H_PK_01 | 1 | | 1 | | | Q1,01 | PCWP | |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Please try using NO_PARALLEL hints in your query.
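For example (object names taken from the plan above; NO_PARALLEL is the 10g spelling, NOPARALLEL in older releases):
-- PX can still be chosen if the tables themselves carry a parallel degree,
-- so clear it on the tables as well as the indexes:
alter table pdc_disbursal_txn_d noparallel;
alter table pdc_disbursal_txn_h noparallel;
alter table lea_agreement_dtl noparallel;
-- Or suppress it for a single statement with a hint:
select /*+ no_parallel(d) */ count(*)
from   pdc_disbursal_txn_d d;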
    R.Wang
    http://www.oraclepoint.com

  • RAC with ASM and shared disks?

    Hi all,
Can someone please clarify this little point? If I use ASM as my storage with a RAC database, I have to configure the nodes with shared disks. At least this is what the UG says:
    When you create a disk group for a cluster or add new disks to an existing clustered disk group, you only need to prepare the underlying physical storage on shared disks. The shared disk requirement is the only substantial difference between using ASM in a RAC database compared to using it in a single-instance Oracle database. ASM automatically re-balances the storage load after you add or delete a disk or disk group.
With my 9i databases, I used HACMP to allow for concurrent VG access among the nodes. My questions are:
1) How can I share this storage as stated above without using HACMP? My understanding is that with 10g I no longer have to use it.
2) Can Oracle's Clusterware be used to share storage? I have not seen any indication that it does this.
3) Does this mean I still have to use HACMP with 10g CRS to allow shared storage?
    Thank you

    "...meaning visible to all the participating nodes, which you don't need HACMP..."
This is one step forward, but still not clear. On Unix, storage is presented to ASM as raw volumes. As such, how can these volumes be visible on all nodes without using HACMP (or whatever third-party clusterware you are using)? Presenting raw volumes on several nodes is not something that is done at the OS level without some clusterware functionality.
I do understand that storage or LUNs can be shared at the SAN fabric level. But then these LUNs are carved in big chunks, and I would like to be able to allocate storage at a much more granular level using raw partitions.
    So all in all, here are my questions ...
1) On Unix platforms, can ASM disks be LUNs, raw volumes, or both?
2) If raw volumes, how are they shared (or made visible) without using third-party clusterware? Having managed 9i RAC, it was the function of HACMP to make these volumes visible on all nodes; otherwise, we had to import/export VGs on all nodes to make them visible.
    Thank you
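For reference, once the LUNs are zoned and the raw device files exist on every node (SAN- and OS-level work; no HACMP is required for ASM in 10g), ASM only needs to be pointed at them, and each node can be checked individually; a sketch with an illustrative device pattern:
-- Point ASM discovery at the shared raw devices (pattern is illustrative):
alter system set asm_diskstring = '/dev/rhdisk*' scope=both;
-- From the ASM instance on each node, verify the same disks are visible:
select path, header_status, state
from   v$asm_disk;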

  • Problem with Oracle RAC with DRCP and persistent connections

    Hello,
I am having a problem that has me stumped: I cannot get SCAN and DRCP to work together nicely.
Right now I can connect through my SCAN hostname fine without connection pooling; once I enable it, I start having connection timeouts. I found that if I specify SQLNET.OUTBOUND_CONNECT_TIMEOUT in sqlnet.ora, the connection times out after the time I set there. If I don't set this option, I get "ORA-12170: TNS:Connect timeout occurred".
The odd thing is that I can leave connection pooling on and connect to one of my RAC instances individually, and it doesn't experience this problem. The problem only pops up when I combine multiple RAC instances (through the use of SCAN or tnsnames.ora) and enable connection pooling.
    I don't know which direction to go to even diagnose this, so any and all help is appreciated.
    Thank you.
    Edited by: Rarp on Oct 1, 2011 10:47 PM

    Hi,
You need to provide more detail on the problem. Enable SQL*Net tracing on the server and the client using the note below:
How to Enable Oracle SQLNet Client, Server, Listener, Kerberos and External Procedure Tracing from Net Manager [ID 395525.1]
    Also you can try check this note:
    11g: ORA-12170 With Combination RAC, DRCP and a Firewall [ID 953277.1]
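It is also worth confirming that the pool is actually running on every instance, since DRCP is started per instance and SCAN can route you to a node where it is not up; for example:
-- Check the default pool's status and settings (run on each instance):
select connection_pool, status, minsize, maxsize
from   dba_cpool_info;
-- Start it where needed (defaults to SYS_DEFAULT_CONNECTION_POOL):
exec dbms_connection_pool.start_pool;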
    Hope this helps,
    Levi Pereira

  • Data not replicated with partition and dynamic calc

Hi,
I am using Essbase 11.1.2.1.
I want to use a replicated partition between two Essbase BSO applications.
But from my source application, I have to send an account, ACCOUNT (level 1), which has several stored members and one dynamic calc member with a formula. Because of this dynamic calc account, the partition doesn't work (the data is not replicated).
What can I do?
Thanks
Fanny

You're right, dynamic calc accounts are not the issue in general; I am using them in partitions.
But my issue is this:
    The source dimension is like this :
    TAX (Dynamic Calc)
    - 6xxxxxxx
    - 6yyyyyyy
    - 6zzzzzzz (Dynamic Calc) = formula
    The target dimension is like this :
    TAX
And so I need to send TAX (source) to TAX (target).
I don't think it is possible, but I would like confirmation before using something else.
I hope it is clearer now.
    thanks
    Fanny

  • Help with partitions and file systems [SOLVED]

Hi, I have been using Ubuntu for a while, and now I want to move to Arch. I've tried it on a PC and I like it, so I want to make the change.
But before installing Arch, I have 2 doubts. I read the Beginners' Guide and also the Installation Guide. They say it is better to have different partitions for /, /boot, /home, /usr, /var, and /tmp.
Usually, I have always used something like this:
    * /boot (32 MB)
    * swap (512 MB)
    * / (6 to 8 GB)
    * /home (about 80 GB)
Is it really better to also have partitions for /var, /usr, and /tmp, or some of them? And in that case, what size should I give them? I don't want to make them too small, but I don't want to waste disk space either.
And that takes me to my second question: which filesystem is better for each partition? In many places I read that JFS is good for /var, or that XFS is better for /home and big files.
I was thinking of using something like this:
    * /boot (ext2)
    * / (JFS)
    * swap
    * /home (XFS)
Is this a good design? Or should I use another filesystem like ReiserFS, etc.? And for the /var, /usr, and /tmp partitions, which one should I use?
Thank you
PS: This PC is going to be a desktop PC.
PS2: Sorry for my bad English; it is not my first language.
Reason for edit: added the swap partition. I forgot it.
    Last edited by Thalskarth (2008-12-21 20:11:00)

Thanks everybody for the help.
kaola_linux wrote: @Thalskarth - it's better to have a separate /var, especially if you're using ext3 or other filesystems designed for larger files on your / and /home partitions. Having a separate partition for /var would be nice (for backup purposes, and for reinstalling without downloading all the packages over again). 5 GB would be sufficient for your /var; anyway, you can always resize it to your needs.
So a 5 GB partition in ReiserFS would be OK for /var.
    Inxsible wrote:
Lots of people also use XFS, which is known to have better performance with huge files. I think ext3 offers a good balance... because I am never sure whether my home partition will have all huge files or not; the same goes for my external drive, so I just use ext3.
If you have a specific partition for movies or some video/audio editing that you do, you may want to consider XFS too. I don't do all that, so I have never used XFS. I wouldn't know the exact performance difference between ext3 and XFS.
Yes, in many places I read that XFS is better for big files. But I couldn't find out what "big file" means. Does it mean a 200 MB file, or a 4.4 GB one?
The same applies to ReiserFS: what is a small file, a 1 MB one or a 4 KB one?
I have always used ext3; I thought of XFS and JFS just to give them a try.
amranu wrote: I have no idea about better filesystems; all my partitions are ext3 (soon to be ext4).
Inxsible wrote: One thing that makes me want to keep ext3 is that ext4 is coming out (soon?) and you can upgrade from 3 to 4 without having to reformat or make backups of your current data.
Is it really coming soon? I didn't consider ext4 because many places said it had been in development for many years... maybe they were a bit out of date.
Edit: I searched the wiki and it says that since 11 October 2008 ext4 is "stable" and has been included in kernel 2.6.28 as a stable release. Is it too early to try it, or is it better to wait a while?
Thanks.
And has anyone tried JFS?
    Last edited by Thalskarth (2008-12-03 00:14:56)

  • Canvas display issue with stills and cross fades

When viewing a collection of JPEG stills with cross fades, the images appear to be "settling in" in the canvas window (almost as if they expand a bit and then retreat back to the space the window has). When I remove the transitions, I don't have the issue. The issue also appears when I do more complex builds. It makes no difference whether the stills are exactly 720x480 or much bigger. I rendered a portion of my sequence out to QuickTime and via Compressor and it shows normal, so my only issue is that my preview before render is not reliable (for the stills portion). I also tried switching off the external video options etc. and changing the size of the canvas window, with no result. Suggestions? (Using FCP 5.0.4 and the latest QuickTime.)

    I rendered a portion of my
    sequence out to QuickTime and via Compressor and it
    shows normal - so my only issue is that my preview
    before render is not reliable
    The preview of unrendered material is never reliable and isn't meant to be. It just gives you a basic idea what's there. Render the transitions and you should see them as they are meant to be seen.

  • Production RAC with ASM and DR with non-rac and ASM

    Hi,
I have a question about whether this configuration is feasible. The production environment is 10gR2 RAC running on a 2-node cluster. The DR site will have a single instance with ASM. Disks from the primary site will be mirrored to the DR site using EMC SRDF. Would I be able to bring up the single instance at the DR site on the mirrored ASM disks once the split is done? We are currently not incorporating Data Guard for the DR solution.
    Wanted to check if anyone has done like this before.
    Thanks

This is feasible. SRDF is capable of providing data at the DR site that is always in a crash-consistent state, without doing anything to the RAC environment at the production site. Starting the database at the DR site will cause it to run crash recovery first. SRDF's synchronous or asynchronous feature should be used in this case.
At the DR site, your default OS device files could be different. Either change the ASM disk string or configure the device files for the disks so that they are the same as the shared device files on the production side. This way, you can use the ASM configuration as-is.
Also, the database must be started in non-cluster mode, by commenting out the cluster-related parameters (i.e., cluster_database) in the init*.ora parameter file.
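A sketch of the non-cluster startup described above (the pfile path is illustrative):
-- In the DR copy's pfile, disable cluster mode:
--   *.cluster_database = false
-- then, from SQL*Plus on the DR host:
startup mount pfile='/u01/app/oracle/initDRDB.ora';
alter database open;   -- crash recovery is applied automatically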

  • CTAS with partitions and constraint statement, Is this possible?  Help!

I'm trying to use CTAS to create a copy of one of our large tables to test the use of local indexes and partition pruning. Can anyone help me out here? Is this doable, or should I go another route? I'm also rearranging the table to put the first column of my primary key as the first column of the table.
    Create table new_table,
    constraint pk_new_table primary key (seq_number,ssn,service_code)
    using index (create index PK_LI_new_table ON new_table (seq_number,ssn,service_code) LOCAL tablespace TS_LI_new_table)
    tablespace TS_NEW_TABLE
    PARTITION BY RANGE (SEQ_NUMBER)
    partition P197203 values less than (2) tablespace ts_new_table_197203,
    partition P200906 values less than (245) tablespace ts_new_table_200906
    AS SELECT (<new order of columns>) from <original_table>
    parallel enable row movement;
    output from statement:
    Create table new_table,
    Error at line 1:
    ORA-00922: missing or invalid option
The asterisk is below the comma in the error statement above. Would I need to list out the new table's columns and datatypes in the CREATE TABLE statement? Any help appreciated.

    Hello,
    CTAS will not replicate the structure of the source table.
    Your best option may be to:
    1. Create the empty table that you'll be populating, perhaps using the DBMS_METADATA package:
    set pagesize 5000
    set long 100000
SELECT DBMS_METADATA.GET_DDL('TABLE','<your_table>','<owner>') FROM DUAL;
2. Then INSERT direct-path into that table:
    INSERT /*+ APPEND */ INTO new_table (co1, col2, col3,... coln)
    SELECT col1, col2, col3,...
      FROM source_table;
wolfeet wrote: Would I need to list out the new table columns and datatypes in the create table statement?
You can specify the order of the columns in the INSERT statement above.
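For completeness, the partitioned CTAS itself is syntactically possible (the constraint is what CTAS will not carry over); the ORA-00922 above mostly comes from the stray comma after the table name and the missing parentheses around the partition list. A rough, untested sketch, with only the key columns listed:
-- Partitioned, parallel CTAS (add the remaining columns to the SELECT in
-- the desired order):
create table new_table
partition by range (seq_number)
( partition p197203 values less than (2) tablespace ts_new_table_197203,
  partition p200906 values less than (245) tablespace ts_new_table_200906 )
parallel
enable row movement
as select seq_number, ssn, service_code
   from   original_table;
-- The primary key and its LOCAL index can then be added afterwards:
alter table new_table add constraint pk_new_table
  primary key (seq_number, ssn, service_code)
  using index (create index pk_li_new_table
               on new_table (seq_number, ssn, service_code) local);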
