Partition hotfix issue

I just opened up my new Yoga, and literally the first thing I tried to do was install the partition hotfix. After I typed "Y" to accept, I got the message "You have changed the default partition layout. The process was failed. Press any key to continue..." I definitely haven't changed anything. I can't find anyone else having this issue. Any ideas?

Hi kimaire,
Please try running the hotfix with admin privileges. Also, if your system didn't need the fix in the first place, it's normal to get an error message.

Similar Messages

  • Error 5 during partition hotfix for Yoga 13

    I am late to doing this, I know, but I am getting the "error 5" about 33 percent through the partition hotfix for the Lenovo Yoga 13. This is killing me because my C drive is out of space and I really need to make it bigger.

    A virus, anti-virus process, Windows Update, or another Windows service may be interfering, so I'm not sure of the exact cause. Can you try reinstalling Windows with the One Key Recovery button and then running the hotfix? Reinstalling the system only takes a few minutes, if that's an option for you.

  • CUCM Cluster "Partition Unaligned" Issue Resolution by PCD (Prime Collaboration Deployment)

    Dear all,
    I will test soon (as soon as I get bootable 9.1.2 software) to solve the "partition unaligned" issue with a CUCM cluster on version 9.1.2.11900-12.
    In my opinion, we can solve this issue with PCD by migrating the unaligned cluster to a new cluster on the same version.
    Has anybody tested this before?
    What are your opinions?
    Best Regards,
    Mesut 

    Hi Aman,
    As I found out, the correct answer should be "migration to 9.x is unsupported by PCD" for CUCM version 9.X.
    But just imagine you have 8 servers in a cluster at version 10.X with the partition unaligned issue. Isn't PCD a good alternative to DRS for solving this issue while migrating to a new cluster?
    Best Regards,
    Mesut 

  • Drive Partition causing issues after new hard drive and recovery disks installed

    I have an HP Pavilion Media Center PC m7640n running Windows XP Media Center Edition.
    My hard drive failed and I had a computer guy install a 1 TB Hard drive as well as additional memory.  I purchased the recovery disks and he installed everything for me.  I have two issues:
    1.  After updating to XP Service Pack 3 and all the other recommended Windows updates, the computer deleted all my drivers and would not "plug and play" install anything.  I had the computer guy back to fix this--he did get all my drivers to work, but when I plug in anything new for the first time (such as my iPad), I have to go into the device manager and manually update the drivers.  The computer doesn't recognize it or automatically prompt me to install it.
    2.  The hard drive partition defaulted to my C drive being very small and my D drive being large.  Since software usually defaults to the C drive for installation, I quickly ran out of space.  I had the computer guy back and he programmed something so that everything now defaults to the D drive.  Unfortunately all my program files (including Microsoft Office) are still on my C drive including everything I put on my desktop.  My C drive is now full.
    Here is an example:  iTunes shows that it is defaulting to the D drive for anything I load into it.  Yet I tried to download the new operating system for my iPad and it said my hard drive was full.  When I did a file search to see what files had changed in the last week, it shows iTunes on my C drive (and my D drive).  Yet all the files that were "changed" due to my upgrade were defaulting to the C drive, even though in the setting in iTunes it shows it should be saving to the D drive.
    D:\Documents and Settings\My Music\iTunes\iTunes Media
    Is there an easy way to fix these issues?  I've already spent more than I anticipated when I chose to rebuild this computer vs buying a new one.  Now I'm regretting the decision and I'd like to try and fix this myself and not pay my computer guy to come back at $90/hour.
    Thanks!

    Here are the original specs of your computer. The specs state that the original hard drive was 320GB. You have since replaced the factory drive with a 1TB (1000GB) hard drive. The HP recovery routine seems to have issues with hard drives that deviate from the original size. The partition issues you are experiencing may be a result of this. As for the rest of your issues, it sounds like the SP3 install went bad.
    If I were you, I would consider reinstalling the OS using your recovery discs by doing a complete "destructive" reinstall/recovery. Afterward, I would post the size of both the c: and d: partitions... and go from there.
    Frank

  • OS 10.2.8 partition/boot issue on PowerMac G3

    Computer: PowerMac G3
    OS: Boot Partition #1=10.2.8; Partition #2=9.2.2
    RAM: 384 MB
    USB PCI card installed
    Issue: Out of the blue my USB ports suddenly stopped working. After some research, I decided to reset my PRAM. This worked--sort of. My USB ports now work fine, but my machine will only boot up in OS 9.2.2. At first, my computer wouldn't even show my OS X partition. A quick check in the Profiler did show the existence of the OS X, but it showed as grayed out, unmounted, and with 0 bytes. In desperation, I ran DiskWarrior on the OS X partition from within OS 9 (it found a LOT of problems, but did claim to fix them). This did bring back my OS X partition to the desktop, Profiler, and Startup Disk chooser. I can now open and access all files on the OS X partition from within OS 9, but when I choose OS X as my startup disk and restart my computer, it just restarts in OS 9 again.
    Question: Does anyone out there have a suggestion on how I might get my computer to once again start up in OS 10.2.8?
    Additional Complication: The original internal CD ROM died a long time ago, and the quick replacement that I found does not allow me to boot from a CD. Can OS 10.2.8 be reinstalled from the OS 9 partition, or am I screwed on that front?
    Thanks in advance,
    darkbodhi

    Gulliver,
    Thanks for the info on PatchBurn--that looks extremely promising, however it appears that you cannot install it on the OS 9 partition which still leaves me in a bind. If/when I finally do get my OS X to boot, I will definitely install this software!
    Thanks again,
    darkbodhi

  • Interval Partition naming Issue (Oracle 11g R2 )

    I need help identifying the latest partition:
    I am using interval partitioning for my table, which creates a partition every month-end based on the inserted data.
    When Oracle creates a partition, it assigns its own name, but users have automated reports using partition names like TABLE_NAME_YYYYMM.
    I tested the following ways to identify the latest partition and then renamed it; this worked fine, but will I get any problems in the long run?
    Or is there any other way to identify the latest partition?
    1)Using max partition position :
    select partition_name from user_tab_partitions a where partition_position = (select max(partition_position)
    from user_tab_partitions b where a.table_name=b.table_name
    and b.table_name = 'INTRVL_PARTITION');
    2)Using Sub Object max creation date :
    SELECT SUBOBJECT_NAME FROM dba_objects
    WHERE OBJECT_TYPE='TABLE PARTITION'
    AND OWNER='ABCD'
    AND OBJECT_NAME='INTRVL_PARTITION'
    AND CREATED=(SELECt MAX(CREATED) FROM DBA_OBJECTS
    WHERE OBJECT_TYPE='TABLE PARTITION'
    AND OWNER='ABCD'
    AND OBJECT_NAME='INTRVL_PARTITION');
    Thanks in advance .
    Edited by: user607128 on Mar 2, 2012 7:09 AM
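Not part of the original post, but a sketch of how the rename step could be automated (the table name is taken from the question; the YYYYMM suffix convention and the timing assumption are hypothetical):

```sql
-- Sketch only: find the highest-position (latest) partition and rename it
-- to the INTRVL_PARTITION_YYYYMM convention the reports expect.
DECLARE
  v_old VARCHAR2(30);
  v_new VARCHAR2(30);
BEGIN
  SELECT partition_name
    INTO v_old
    FROM user_tab_partitions
   WHERE table_name = 'INTRVL_PARTITION'
     AND partition_position = (SELECT MAX(partition_position)
                                 FROM user_tab_partitions
                                WHERE table_name = 'INTRVL_PARTITION');
  -- assumes the job runs just after month-end, so last month's label applies
  v_new := 'INTRVL_PARTITION_' || TO_CHAR(ADD_MONTHS(SYSDATE, -1), 'YYYYMM');
  EXECUTE IMMEDIATE 'ALTER TABLE intrvl_partition RENAME PARTITION "'
                    || v_old || '" TO ' || v_new;
END;
/
```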

    Your initial question said you were already using interval partitioning.
    >
    I am using Interval Partition for my table
    now users asking to convert Range to Interval Partition and keeping existing partitions same name.
    >
    So is your issue that you want to use interval partitioning for a table that is now using range partitioning? Why are your users driving this change? That should be a decision made by the DBA and technical management, since the difference is mainly one of management.
    Though there is one BIG difference that is data related. With interval partitioning if data (even erroneous data) is inserted for a partition that does not exist Oracle will create one. So if you have an insert statement that inserts 12 records for the year '2038' you will get 12 new partitions even though the data may be bogus.
    Now your management problem is detecting the problem, deleting the data (or fixing the date to move it to the right partition) and then dropping the partitions and storage.
    With 'interval' partitioning you had better be absolutely sure your data is clean in terms of the partitioning key values.
    That said, you can use table redefinition or just create a new interval partitioned table and do partition exchanges with the existing partitions to move the data.
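A minimal sketch of that last suggestion (all object names are hypothetical, not from the thread): create the interval-partitioned target empty, then move each existing partition's segment across via an intermediate staging table.

```sql
-- New interval-partitioned target with the same column layout.
CREATE TABLE sales_intvl (
  sale_dt DATE NOT NULL,
  amount  NUMBER
)
PARTITION BY RANGE (sale_dt) INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
  (PARTITION p_first VALUES LESS THAN (DATE '2012-01-01'));

-- Plain staging table used as the go-between for each partition.
CREATE TABLE sales_stage AS SELECT * FROM sales_intvl WHERE 1 = 2;

-- Move January's segment: out of the old range table, into the new one.
ALTER TABLE sales_range EXCHANGE PARTITION p_201201 WITH TABLE sales_stage;
-- Materialize the matching interval partition before exchanging into it.
LOCK TABLE sales_intvl PARTITION FOR (DATE '2012-01-15') IN SHARE MODE;
ALTER TABLE sales_intvl EXCHANGE PARTITION FOR (DATE '2012-01-15')
      WITH TABLE sales_stage;
```

Note the LOCK TABLE trick: an interval partition does not exist until something touches it, and EXCHANGE requires an existing partition.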

  • 10gR2 Spatial + Partitioning Performance Issues

    I'm trying to get spatial working reasonably with a (range) partitioned table, containing a single 2D point value, all local indexes.
    If I query directly against a single partition, or have a restriction which "prunes" down to one partition, performance is reasonable, and the plan looks like this:
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=3303 pr=0 pw=0 time=1598104 us)
    2596 PARTITION RANGE SINGLE PARTITION: 2 2 (cr=3303 pr=0 pw=0 time=1584119 us)
    2596 TABLE ACCESS BY LOCAL INDEX ROWID FOO PARTITION: 2 2 (cr=3303 pr=0 pw=0 time=1581494 us)
    2596 DOMAIN INDEX FOO_SDX (cr=707 pr=0 pw=0 time=1550312 us)
    If my query is a bit looser, and ends up hitting 2 or more partitions, things degrade substantially, and I end up with a plan like this:
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=10472 pr=0 pw=0 time=6592543 us)
    5188 PARTITION RANGE INLIST PARTITION: KEY(INLIST) KEY(INLIST) (cr=10472 pr=0 pw=0 time=3349053 us)
    5188 TABLE ACCESS BY LOCAL INDEX ROWID FOO PARTITION: KEY(INLIST) KEY(INLIST) (cr=10472 pr=0 pw=0 time=6586055 us)
    5188 BITMAP CONVERSION TO ROWIDS (cr=5955 pr=0 pw=0 time=6539205 us)
    2 BITMAP AND (cr=5955 pr=0 pw=0 time=6539145 us)
    2 BITMAP CONVERSION FROM ROWIDS (cr=514 pr=0 pw=0 time=209088 us)
    5188 SORT ORDER BY (cr=514 pr=0 pw=0 time=206661 us)
    5188 DOMAIN INDEX FOO_SDX (cr=514 pr=0 pw=0 time=158447 us)
    12 BITMAP OR (cr=5441 pr=0 pw=0 time=7052201 us)
    6 BITMAP CONVERSION FROM ROWIDS (cr=2650 pr=0 pw=0 time=3356960 us)
    1000000 SORT ORDER BY (cr=2650 pr=0 pw=0 time=3173026 us)
    1000000 INDEX RANGE SCAN PK_FOO_IDX PARTITION: KEY(INLIST) KEY(INLIST) (cr=2650 pr=0 pw=0 time=193 us)(object id 63668)
    6 BITMAP CONVERSION FROM ROWIDS (cr=2791 pr=0 pw=0 time=3292124 us)
    1000000 SORT ORDER BY (cr=2791 pr=0 pw=0 time=3153435 us)
    1000000 INDEX RANGE SCAN PK_FOO_IDX PARTITION: KEY(INLIST) KEY(INLIST) (cr=2791 pr=0 pw=0 time=1000160 us)(object id 63668)
    Now this is a simple test case. My real situation is a bit more complex, with more data, more partitions, and another table joined in to do the partition pruning, but it comes down to the same issues.
    I've tried various hints, but have not been able to change the plan substantially.
    I've written a similar test case with btree indexes and it does not have these problems, and actually does pretty good with simple MBR type queries.
    I'll post another message with the spatial test case script...
    --Peter

    Here is the test script (kind of long):
    --create a partitioned table with local spatial index...
    create table foo (
         pid number not null, --partition_id
         id number not null,
          location MDSYS.SDO_GEOMETRY null --needs to be null for CTAS to work
     )
     PARTITION BY RANGE (pid) (
          PARTITION P0 VALUES LESS THAN (1)
     );
    create index pk_foo_idx on foo(pid, id) local;
    alter table foo add constraint pk_foo
    primary key (pid, id) using index pk_foo_idx;
    INSERT INTO USER_SDO_GEOM_METADATA
    VALUES (
    'FOO',
    'LOCATION',
         mdsys.sdo_dim_array(
              mdsys.sdo_dim_element('Longitude', -180, 180, 50),
               mdsys.sdo_dim_element('Latitude', -90, 90, 50)
          ),
          8307);
    INSERT INTO USER_SDO_GEOM_METADATA
    VALUES (
    'FOO1',
    'LOCATION',
         mdsys.sdo_dim_array(
              mdsys.sdo_dim_element('Longitude', -180, 180, 50),
               mdsys.sdo_dim_element('Latitude', -90, 90, 50)
          ),
          8307);
    INSERT INTO USER_SDO_GEOM_METADATA
    VALUES (
    'FOO2',
    'LOCATION',
         mdsys.sdo_dim_array(
              mdsys.sdo_dim_element('Longitude', -180, 180, 50),
               mdsys.sdo_dim_element('Latitude', -90, 90, 50)
          ),
          8307);
    commit;
    --local spatial index on main partitioned table
    CREATE INDEX foo_sdx ON foo (location)
    INDEXTYPE IS MDSYS.SPATIAL_INDEX
         PARAMETERS ('layer_gtype=POINT') LOCAL;
    --staging tables for exchanging with partitions later
    create table foo1 as select * from foo where 1=2;
    create table foo2 as select * from foo where 1=2;
    declare
         v_lon number;
         v_lat number;
    begin
         for i in 1..1000000 loop
              v_lat := DBMS_RANDOM.value * 20;
              v_lon := DBMS_RANDOM.value * 20;
              insert into foo1 (pid, id, location) values
              (1, i, MDSYS.SDO_GEOMETRY(2001,8307,MDSYS.SDO_POINT_TYPE(v_lon,v_lat,null),NULL,NULL));
              insert into foo2 (pid, id, location) values
              (2, 1000000+i, MDSYS.SDO_GEOMETRY(2001,8307,MDSYS.SDO_POINT_TYPE(v_lon,v_lat,null),NULL,NULL));
         end loop;
    end;
    /
    commit;
    --index everything the same way
    create index pk_foo_idx1 on foo1(pid, id);
    alter table foo1 add constraint pk_foo1
    primary key (pid, id) using index pk_foo_idx1;
    create index pk_foo_idx2 on foo2(pid, id);
    alter table foo2 add constraint pk_foo2
    primary key (pid, id) using index pk_foo_idx2;
    CREATE INDEX foo_sdx1 ON foo1 (location)
    INDEXTYPE IS MDSYS.SPATIAL_INDEX
         PARAMETERS ('layer_gtype=POINT');
    CREATE INDEX foo_sdx2 ON foo2 (location)
    INDEXTYPE IS MDSYS.SPATIAL_INDEX
         PARAMETERS ('layer_gtype=POINT');
    exec dbms_stats.gather_table_stats(user, 'FOO', cascade=>true);
    exec dbms_stats.gather_table_stats(user, 'FOO1', cascade=>true);
    exec dbms_stats.gather_table_stats(user, 'FOO2', cascade=>true);
    alter table foo add partition p1 values less than (2);
    alter table foo add partition p2 values less than (3);
    alter table foo exchange partition p1 with table foo1 including indexes;
    alter table foo exchange partition p2 with table foo2 including indexes;
    drop table foo1;
    drop table foo2;
    --ok, now lets run some queries
    set timing on
    alter session set events '10046 trace name context forever, level 12';
    --easy one, single partition  (trace ET=0.18s)
    select count(*) from (
         select d.pid, d.id
         from foo partition(p1) d
         where
         sdo_filter(d.location, SDO_geometry(
              2003,8307,NULL,
              SDO_elem_info_array(1,1003,3),
              SDO_ordinate_array(0.1,0.1, 1.1,1.1))
          ) = 'TRUE'
     );
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=3303 pr=0 pw=0 time=1598104 us)
    2596 PARTITION RANGE SINGLE PARTITION: 2 2 (cr=3303 pr=0 pw=0 time=1584119 us)
    2596 TABLE ACCESS BY LOCAL INDEX ROWID FOO PARTITION: 2 2 (cr=3303 pr=0 pw=0 time=1581494 us)
    2596 DOMAIN INDEX FOO_SDX (cr=707 pr=0 pw=0 time=1550312 us)
    --partition pruning works for 1 partition (trace ET=0.18s),
    --uses pretty much the same plan as above
    select count(*) from (
         select d.pid, d.id
         from foo d
         where
         d.pid = 1 and
         sdo_filter(d.location, SDO_geometry(
              2003,8307,NULL,
              SDO_elem_info_array(1,1003,3),
              SDO_ordinate_array(0.1,0.1, 1.1,1.1))
          ) = 'TRUE'
     );
    --heres where the trouble starts  (trace ET=6.59s)
    select count(*) from (
         select d.pid, d.id
         from foo d
         where
         d.pid in (1,2) and
         sdo_filter(d.location, SDO_geometry(
              2003,8307,NULL,
              SDO_elem_info_array(1,1003,3),
              SDO_ordinate_array(0.1,0.1, 1.1,1.1))
          ) = 'TRUE'
     );
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=10472 pr=0 pw=0 time=6592543 us)
    5188 PARTITION RANGE INLIST PARTITION: KEY(INLIST) KEY(INLIST) (cr=10472 pr=0 pw=0 time=3349053 us)
    5188 TABLE ACCESS BY LOCAL INDEX ROWID FOO PARTITION: KEY(INLIST) KEY(INLIST) (cr=10472 pr=0 pw=0 time=6586055 us)
    5188 BITMAP CONVERSION TO ROWIDS (cr=5955 pr=0 pw=0 time=6539205 us)
    2 BITMAP AND (cr=5955 pr=0 pw=0 time=6539145 us)
    2 BITMAP CONVERSION FROM ROWIDS (cr=514 pr=0 pw=0 time=209088 us)
    5188 SORT ORDER BY (cr=514 pr=0 pw=0 time=206661 us)
    5188 DOMAIN INDEX FOO_SDX (cr=514 pr=0 pw=0 time=158447 us)
    12 BITMAP OR (cr=5441 pr=0 pw=0 time=7052201 us)
    6 BITMAP CONVERSION FROM ROWIDS (cr=2650 pr=0 pw=0 time=3356960 us)
    1000000 SORT ORDER BY (cr=2650 pr=0 pw=0 time=3173026 us)
    1000000 INDEX RANGE SCAN PK_FOO_IDX PARTITION: KEY(INLIST) KEY(INLIST) (cr=2650 pr=0 pw=0 time=193 us)(object id 63668)
    6 BITMAP CONVERSION FROM ROWIDS (cr=2791 pr=0 pw=0 time=3292124 us)
    1000000 SORT ORDER BY (cr=2791 pr=0 pw=0 time=3153435 us)
    1000000 INDEX RANGE SCAN PK_FOO_IDX PARTITION: KEY(INLIST) KEY(INLIST) (cr=2791 pr=0 pw=0 time=1000160 us)(object id 63668)
    --this performs better but is ugly and non-general (trace ET=0.35s)
    select count(*) from (
         select d.pid, d.id
         from foo d
         where
         d.pid = 1 and
         sdo_filter(d.location, SDO_geometry(
              2003,8307,NULL,
              SDO_elem_info_array(1,1003,3),
              SDO_ordinate_array(0.1,0.1, 1.1,1.1))
         ) = 'TRUE'
         UNION ALL
         select d.pid, d.id
         from foo d
         where
         d.pid = 2 and
         sdo_filter(d.location, SDO_geometry(
              2003,8307,NULL,
              SDO_elem_info_array(1,1003,3),
              SDO_ordinate_array(0.1,0.1, 1.1,1.1))
         ) = 'TRUE'
    );

  • Partitions process issues

     
    The "Ending Count" for 7th Feb is not matching the "Starting Count" of 9th Feb. The following could be the reason:
    After investigation I found that the propagation on the 8th was interrupted and stopped while classifications were being calculated. Because of this problem, the FactContracts table contains most of the data, but FactContractClassification does not contain any data for the 8th, which means that the cube for the 8th partition was not processed.
    Hi,
    As per my investigation, I am sending a detailed description. I am creating partitions daily with dynamic SSIS packages, e.g. 1-Feb-14, 2-Feb-14, 3-Feb-14; the 1-Feb-14 ending count should be the 2-Feb-14 starting count. The data actually came in before 9th Feb; the mismatch happens after that.
    According to our Excel sheet, as per the Cmsed data source, on 9th Feb the starting count is 147058, but the count should be 147098. Because of this, the rest of the data in the following partitions is getting affected.
    As per the bcsas data source, on 9th Feb the starting count is 186815, but the count should be 186785, a difference of 30 records. For this reason, the rest of the data in the following partitions is also getting affected.
    As per the csdrs data source, on 9th Feb the starting count is 957751, but the count should be 956690, a difference of 1061 records. For this reason, the rest of the data in the following partitions is getting affected.
    But I am creating dynamic partitions, so how can this happen, and how do I solve this issue?
    Please help me.

    Hi Sheshu,
    According to your description, you are experiencing the issue when processing one of the partitions in your SQL Server Analysis Services database, right?
    In your scenario, it's hard to identify the root cause of this issue with the limited information available. Here is a blog that describes data collection for troubleshooting Analysis Services issues; you can refer to the link below to get the detailed error and make further analysis.
    http://blogs.msdn.com/b/as_emea/archive/2012/01/02/initial-data-collection-for-troubleshooting-analysis-services-issues.aspx
    If the issue persists, please provide us more detail information, so that we can make further analysis.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Semantic Partitioning Delta issue while load data from DSO to Cube -BW 7.3

    Hi All,
    We have created the semantic partitions with the help of a BAdI. Everything looks good;
    the first time I loaded the data, it was a full update.
    The second time, I initialized the delta and pulled the delta from the DSO to the cube. The DSO is standard, whereas the cube is partitioned with the help of semantic partitioning. What I can see is that only the records updated in the latest delta are shown in the report; all the rest are ignored by the system.
    I tried compression, but it still did not work.
    Has someone faced this kind of issue?
    Thanks

    Yaseen & Soujanya,
    It is very hard to guess the problem with the amount of information you have provided.
    - What do you mean by cube is not being loaded? No records are extracted from DSO? Is the load completing with 0 records?
    - How is data loaded from DSO to InfoCube? Full? Are you first loading to PSA or not?
    - Is there data already in the InfoCube?
    - Is there change log data for DSO or did someone delete all the PSA data?
    Since there are so many reasons for the behavior you are witnessing, your best option is to approach your resident expert.
    Good luck.
    Sudhi Karkada

  • Partition Tables - issue

    I have lots of partitioned tables in my schema; the requirement is to identify the current partition of each table.
    Let me explain this with an example.
    Let's say I have a table T with 24 partitions, starting from Jan-04 to Dec-05. Now, my current partition is Dec-04, i.e. the partition w.r.t. sysdate.
    How can I achieve this by querying the data dictionary?

    120196, I think you are mistaken. There is no such thing as a "current" partition. Oracle offers three types of partitioning: range partitioning, hash partitioning, and composite partitioning. For all three types, you specify the column you want to partition on. Then, when you insert/update/delete rows, Oracle does "partition elimination" (if the column you partitioned on is part of the query), which greatly improves performance. So the partition you deal with (I guess this is what you mean by current partition) is the one your row maps to. Hope this makes it clearer and gives you a 10,000-foot view. There is lots more to partitioning.
    -Raj Suchak
    [email protected]
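For what it's worth, a data-dictionary sketch that approximates a "current" partition for a date-range-partitioned table (table name T from the question; assumes all partition bounds are dates, with no MAXVALUE partition, since HIGH_VALUE is a LONG that must be evaluated via dynamic SQL):

```sql
SET SERVEROUTPUT ON
DECLARE
  v_date DATE;
BEGIN
  -- Walk partitions in order; the first one whose upper bound lies
  -- beyond SYSDATE is the partition a row inserted "now" would land in.
  FOR p IN (SELECT partition_name, high_value
              FROM user_tab_partitions
             WHERE table_name = 'T'
             ORDER BY partition_position) LOOP
    EXECUTE IMMEDIATE 'SELECT ' || p.high_value || ' FROM dual' INTO v_date;
    IF SYSDATE < v_date THEN
      DBMS_OUTPUT.put_line('Current partition: ' || p.partition_name);
      EXIT;
    END IF;
  END LOOP;
END;
/
```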

  • Partition Exchange Issue

    Hello,
    When I am trying to do a partition exchange between an unpartitioned and a partitioned table, I am getting the following error:
    ORA-08103: object no longer exists
    The indexes of the two tables match. I do not know what caused this problem. Can you please help me in resolving this?
    Thanks
    Sathish

    Hi,
    Check Metalink for this.
    There are a lot of bugs related to this error.
    Also check this blog.
    Best Regards,
    Alex

  • Mounting data partition + Finder issue

    Hello,
    I have now 2 disks in my MBP (SSD + HDD in Optibay). I want to use the HDD for data only storage and I would like to mount the disk in the folder
    ~/Data
    So I created the dir and mounted the partition manually to test it :
    sudo mount_hfs /dev/disk1s2 /Users/me/Data/
    It works but there is a little annoyance:
    I open Finder and go to my Home dir. Now I enter the Data directory where my partition is mounted. First, the sidebar does not show now that I'm in my user directory. Now if I want to step one level up using Cmd + UpArrow, I do not go back to my home folder as expected but to a list of all my disks.
    Is there a way to fix that ?

    I also tried with the "nobrowse" option but the behavior is the same.

  • RAW Partition setup issue:::HELP PLS

    Hi,
    I set up the raw partition in the following way:
    crw-rw-rw- 1 oracle sysdba 162, 1 Mar 20 2002 raw1
    raw /dev/raw/raw1 /dev/rd/c0d1p3
    /dev/raw/raw1: bound to major 48, minor 11
    Now, while trying to create a datafile, I hit an error.
    Then I wanted to check at the OS level, so:
    dd if=example.lst of=/dev/raw/raw1
    No such device or address
    0+1 records in
    0+0 records out
    All the notes and documents have been referred to, but nothing seems to be working... can someone guide me please? (Is it a bug with AS 2.1?)
    Thanks

    What is the data type in DB2 and Oracle for 'columnB'?

  • Vfat partition mount issue [Solved]

    I have two fat32 partitions (/dev/sda5 and /dev/sda6) that I cannot seem to mount automatically on bootup
    The error given is "mount point does not exist"
    However the directories that they mount into do exist - running "mount /dev/sda5" works fine once I am logged in
    This mounts them into the correct folders
    my fstab is:
    # /etc/fstab: static file system information
    # <file system>        <dir>         <type>    <options>          <dump> <pass>
    none                   /dev/pts      devpts    defaults            0      0
    none                   /dev/shm      tmpfs     defaults            0      0
    #/dev/cdrom             /media/cd   auto    ro,user,noauto,unhide   0      0
    #/dev/dvd               /media/dvd  auto    ro,user,noauto,unhide   0      0
    #/dev/fd0               /media/fl   auto    user,noauto             0      0
    #UUID=4708-CDB1 /mnt/documents vfat defaults 0 0
    #UUID=486B-CDCA /mnt/media vfat defaults 0 0
    /dev/sda5 /home/media vfat user,rw,umask=000 0 0
    /dev/sda6 /home/documents vfat user,rw,umask=000 0 0
    /dev/sda11 /usr ext4 defaults 0 1
    UUID=6adfcc0e-6b08-49aa-9e75-234f45695ca3 /home ext4 defaults 0 1
    UUID=6e604f97-451c-45ed-b761-75242e4df880 swap swap defaults 0 0
    UUID=b76b2dfe-a561-4e15-8209-cc3e56b2087e / ext4 defaults 0 1
    The three file system checks happen OK, but then the errors appear about the non-existent mount points.
    As you can see from my fstab, I have tried mounting them into /mnt/documents etc. as well, which also returns the errors.
    I would prefer a nice clean fix, but would be happy with a script that runs on login to execute "mount /dev/sda5"; I don't know how to do that or where to put it.
    Thanks
    Richard
    Last edited by rgeo (2009-03-23 15:51:17)

    rgeo,
    probably /home is not yet mounted when the system tries to mount /dev/sda5.
    I suppose that mount order in /etc/fstab matters, even though I am not sure.
    That would not explain why mounting in /mnt failed, if the directory documents existed.
    You can always put the line 'mount /dev/sda5' in /etc/rc.local, which gets executed before login.
    Mektub
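
    For what it's worth, here is the same fstab with the entries reordered so that /home is mounted before the vfat partitions that live beneath it (a sketch based on rgeo's posted fstab; mount -a processes fstab top to bottom):

    ```
    # <file system>                            <dir>           <type> <options>          <dump> <pass>
    UUID=b76b2dfe-a561-4e15-8209-cc3e56b2087e  /               ext4   defaults           0      1
    /dev/sda11                                 /usr            ext4   defaults           0      1
    UUID=6adfcc0e-6b08-49aa-9e75-234f45695ca3  /home           ext4   defaults           0      1
    /dev/sda5                                  /home/media     vfat   user,rw,umask=000  0      0
    /dev/sda6                                  /home/documents vfat   user,rw,umask=000  0      0
    ```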

  • Best approach to do Range partitioning on Huge tables.

    Hi All,
    I am working on 11gR2 oracle 3node RAC database. below are the db details.
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    In my environment we have 10 big transaction tables (10 billion rows) and they keep growing bigger and bigger. Now management is planning to do a range partition based on the created_dt partition key column.
    We tested this partitioning strategy with a few million records in another environment, with the steps below.
    1. CREATE TABLE TRANSACTION_N
    PARTITION BY RANGE ("CREATED_DT")
    ( PARTITION DATA1 VALUES LESS THAN (TO_DATE(' 2012-08-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART1,
    PARTITION DATA2 VALUES LESS THAN (TO_DATE(' 2012-09-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART2,
    PARTITION DATA3 VALUES LESS THAN (TO_DATE(' 2012-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART3
    as (select * from TRANSACTION where 1=2);
    2. Exchange partitions to move the data from the old table into the new partitioned table.
    ALTER TABLE TRANSACTION_N
    EXCHANGE PARTITION DATA1
    WITH TABLE TRANSACTION
    WITHOUT VALIDATION;
    3. Create the required indexes (took almost 3.5 hrs with parallel 16).
    4. Rename the tables and drop the old ones.
    This took around 8 hrs for one table with 70 million records, so for billions of records it will take much longer. The problem is we get only 2 to 3 hrs of downtime in production to implement this change for all tables.
    Can you please suggest the best approach to copy that much data from the existing table to the newly created partitioned table and create the required indexes?
    Thanks,
    Hari

    >
    In my environment we have 10 big transaction tables (10 billion rows) and they keep growing bigger and bigger. Now management is planning to do a range partition based on the created_dt partition key column.
    We tested this partitioning strategy with a few million records in another environment, with the steps below.
    1. CREATE TABLE TRANSACTION_N
    PARTITION BY RANGE ("CREATED_DT")
    ( PARTITION DATA1 VALUES LESS THAN (TO_DATE(' 2012-08-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART1,
    PARTITION DATA2 VALUES LESS THAN (TO_DATE(' 2012-09-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART2,
    PARTITION DATA3 VALUES LESS THAN (TO_DATE(' 2012-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART3
    as (select * from TRANSACTION where 1=2);
    2. Exchange partitions to move the data from the old table into the new partitioned table.
    ALTER TABLE TRANSACTION_N
    EXCHANGE PARTITION DATA1
    WITH TABLE TRANSACTION
    WITHOUT VALIDATION;
    3. create required indexes (took almost 3.5 hrs with parallel 16).
    4. Rename the table names and drop the old tables.
    This took around 8 hrs for one table with 70 million records, so for billions of records it will take much longer. The problem is we get only 2 to 3 hrs of downtime in production to implement this change for all tables.
    Can you please suggest the best approach to copy that much data from the existing table to the newly created partitioned table and create the required indexes?
    >
    Sorry to tell you, but that test and partitioning strategy is essentially useless and won't work for your entire table anyway. One reason is that if you use the WITHOUT VALIDATION clause you must ensure that the data being exchanged actually belongs to the partition you are putting it in. If it doesn't, you won't be able to re-enable or rebuild any primary key or unique constraints that exist on the table.
    See Exchanging Partitions in the VLDB and Partitioning doc
    http://docs.oracle.com/cd/E18283_01/server.112/e16541/part_admin002.htm#i1107555
    >
    When you specify WITHOUT VALIDATION for the exchange partition operation, this is normally a fast operation because it involves only data dictionary updates. However, if the table or partitioned table involved in the exchange operation has a primary key or unique constraint enabled, then the exchange operation is performed as if WITH VALIDATION were specified to maintain the integrity of the constraints.
    If you specify WITHOUT VALIDATION, then you must ensure that the data to be exchanged belongs in the partition you exchange.
    >
    Comments below are limited to working with ONE table only.
    ISSUE #1 - ALL data will have to be moved regardless of the approach used. This should be obvious, since your current data is all in one segment but each partition of a partitioned table requires its own segment. So the nut of partitioning is splitting the existing data into multiple segments, almost as if you were splitting it up and inserting it into multiple tables, one table for each partition.
    ISSUE #2 - You likely cannot move that much data in the 2 to 3 hours of downtime you have available, even if all you had to do was copy the existing datafiles.
    ISSUE #3 - Even if you can avoid issue #2, you likely cannot rebuild ALL of the required indexes in whatever remains of the outage window after moving the data itself.
    ISSUE #4 - Unless you have conducted full-volume performance testing in another environment prior to doing this in production, you are taking on a tremendous amount of risk.
    ISSUE #5 - Unless you have fully documented the current, actual execution plans for your most critical queries in your existing system, you will have great difficulty overcoming issue #4, since you won't have the requisite plan baseline to know whether the new partitioning and indexing strategies are giving you equivalent, or better, performance.
    ISSUE #6 - Things can, and will, go wrong and cause delays no matter which approach you take.
    So assuming you plan to take care of issues #4 and #5 you will probably have three viable alternatives:
    1. Use DBMS_REDEFINITION to do the partitioning online. See the Oracle docs and this oracle-base example for more info.
    Redefining Tables Online - http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables007.htm
    Partitioning an Existing Table using DBMS_REDEFINITION
    http://www.oracle-base.com/articles/misc/partitioning-an-existing-table.php
    2. Do the partitioning offline and hope that you don't exceed your outage window. Recover by continuing to use the existing table.
    3. Do the partitioning offline, but remove the oldest data to minimize the amount of data that has to be worked with.
    You should review all of the tables to see if you can remove older data from the current system. If you can, you could use online redefinition that ignores the older data; afterwards you can extract that old data from the old table for archiving.
    If the amount of old data is substantial you can extract the new data to a new partitioned table in parallel and not deal with the old data at all.
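
    As a rough sketch of alternative 1, the online redefinition flow looks roughly like this. The schema name (SCOTT), interim table name, dates, and the use of the primary key as the redefinition method are all placeholders, so treat it as an outline of the API rather than a runnable script, and consult the docs linked above first:

    ```sql
    -- 1. Confirm the table can be redefined online (by primary key here).
    BEGIN
      DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'TRANSACTION',
        DBMS_REDEFINITION.CONS_USE_PK);
    END;
    /

    -- 2. Create the interim table with the desired range partitioning.
    CREATE TABLE TRANSACTION_INTERIM
    PARTITION BY RANGE (CREATED_DT)
    ( PARTITION DATA1 VALUES LESS THAN (TO_DATE('2012-08-01', 'YYYY-MM-DD')),
      PARTITION DATA2 VALUES LESS THAN (TO_DATE('2012-09-01', 'YYYY-MM-DD')),
      PARTITION DATA3 VALUES LESS THAN (TO_DATE('2012-10-01', 'YYYY-MM-DD')) )
    AS SELECT * FROM TRANSACTION WHERE 1 = 2;

    -- 3. Start the redefinition; data is copied while the table stays online.
    BEGIN
      DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'TRANSACTION',
        'TRANSACTION_INTERIM');
    END;
    /

    -- 4. Copy dependents (indexes, triggers, constraints, grants).
    DECLARE
      l_errors PLS_INTEGER;
    BEGIN
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'TRANSACTION',
        'TRANSACTION_INTERIM', DBMS_REDEFINITION.CONS_ORIG_PARAMS,
        num_errors => l_errors);
    END;
    /

    -- 5. Finish: a brief lock swaps the two table definitions.
    BEGIN
      DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'TRANSACTION',
        'TRANSACTION_INTERIM');
    END;
    /
    ```

    The index builds in step 4 still take their 3.5 hrs or so, but they happen while the original table remains in use, which is what keeps the actual outage down to the final swap.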
