Modify the partition table

Hi,
Please let me know how to modify the partition table of the disk on which root is mounted. I want to extend the root space by modifying the partition table.
Is there any other way of extending the root disk space?
Thanks in Advance
- Sarat.

Take a backup, re-partition, newfs, restore (you are talking about a simple root, no mirroring/Veritas, right?)

Similar Messages

  • Tag Values in the partition Table

    I get the following information when I use format to print the partition table:
    Part       Tag    Flag    Cylinders        Size            Blocks
      0       root     wm     0 -    25      129.19MB    (26/0/0)       264576
      1       swap     wu    26 -    51      129.19MB    (26/0/0)       264576
      2     backup     wu     0 - 14086       68.35GB    (14087/0/0) 143349312
      3 unassigned     wm     0                  0       (0/0/0)             0
      4 unassigned     wm     0                  0       (0/0/0)             0
      5 unassigned     wm     0                  0       (0/0/0)             0
      6        usr     wm    52 - 14086       68.10GB    (14035/0/0) 142820160
      7 unassigned     wm     0                  0       (0/0/0)             0
    I know the possible tag values are unassigned, boot, root, swap, usr, backup, var, home, reserved. However, what does each of these tag values mean? If I assign a tag value of var to a partition, does that mean I have to use that partition as /var?
    Thanks,
    Paul0al

    answer #1 - man -s 1m prtvtoc
    answer #2 - no, not at all... once a partition is created, as long as you use a wm (read/write, mountable) slice as a mounted filesystem (after newfs or mkfs has been run) and don't try to mount a wu (read/write, unmountable) slice, you can tag the slices anything you want - the tag is pretty much arbitrary (just there for your own sanity)

  • How to update a view without modifying the base table?

    Hi Experts, I need help with two questions:
    1. How to update a view without modifying the base table?
    2. How to write a file to the Unix operating system from PL/SQL? Is there any built-in procedure for this?
    Thank you

    Hi,
    I'm not sure what you're asking in either question. It would help if you gave a specific example of what you want to do.
    SowmyRaj wrote:
    Hi Experts, I need help with two questions:
    1. How to update a view without modifying the base table?
    You can't.
    Views don't contain any data; they just query base tables.
    You can change the definition of a view (CREATE OR REPLACE VIEW ...) so that it appears that the base table(s) have changed; that won't change the base tables.
    2. How to write a file to the Unix operating system from PL/SQL? Is there any built-in procedure for this?
    The package utl_file has routines for working with files.
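    For illustration, here is a minimal sketch of both points; the view, table, directory object and file names (emp_v, emp, OUT_DIR, demo.txt) are made up for the example:
    -- Redefine a view without touching its base table.
    CREATE OR REPLACE VIEW emp_v AS
      SELECT empno, ename, deptno
      FROM   emp
      WHERE  deptno = 10;
    -- Write a line to a file on the database server with utl_file.
    -- Assumes a directory object already exists, e.g. CREATE DIRECTORY out_dir AS '/tmp';
    DECLARE
      f UTL_FILE.FILE_TYPE;
    BEGIN
      f := UTL_FILE.FOPEN('OUT_DIR', 'demo.txt', 'w');
      UTL_FILE.PUT_LINE(f, 'Hello from PL/SQL');
      UTL_FILE.FCLOSE(f);
    END;
    /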

  • How to modify the standard table

    Please answer my question:
    how do I modify a standard table?
    To my knowledge we need an access key for this - is that correct or wrong?

    Hi
    The system asks for an access key for only two reasons:
    1) You may have a problem with access rights. You would have to contact the Basis people.
    2) You may be trying to name an object in a way that does not comply with the naming rules. You may have to check that as well.
    The steps to get an access key:
    You can try via transaction OSS1.
    In your Inbox, click on 'Registration', then on 'Register Objects'; then you will have to choose your installation and give details about your object.
    You can get those details by going to your object and clicking 'Change' - the pop-up screen that asks you for the access key gives you all the details you need to fill in on OSS1 to get your key.
    or
    You can get the access key from www.service.sap.com
    After getting into the site, select Quick Links, then go to SSCR, and in there select Registration;
    after giving the proper details, you can get the access key.
    or
    On the SAP support portal (SAPNet):
    --> key & request
    ---> register SSCR key
    ---> registration
    ---> register developer
    and then choose your right installation number;
    you can get the access key this way.
    Reward if useful.

  • Regarding: Modifying the partition key column

    Hi,
    I have created one table. At that time I did range partitioning on one column (callend). This table has 1000 million records. Now I want to change the partitioning to another column (callstart). How can I do this?
    Kindly provide me a suggestion.
    Regards.

    Much nicer type of reply:
    My best advice would be to just create a new table.
    Since your data is already partitioned, it is unreasonable at this point to try to "change" the partitioning column in place.
    Yes, you obviously have to, otherwise you wouldn't be asking, I'm sure. Go ahead and just create a new partitioned table based on callstart and relax for a few hours :) (a rough sketch follows below)
    You might even want to use two separate boxes within the same network to perform this task, so that one box is focused on selecting and sending and the other box is worried about partitioning the data into the new table. Access via DB link.
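    A rough sketch of that approach; the table names and partition bounds are invented for the example, the column names (callstart, callend) are from your post, and callstart is assumed to be a DATE:
    -- Create the new table partitioned by callstart and load it in one pass.
    CREATE TABLE calls_new
      PARTITION BY RANGE (callstart) (
        PARTITION p2008h1 VALUES LESS THAN (TO_DATE('2008-07-01','YYYY-MM-DD')),
        PARTITION p2008h2 VALUES LESS THAN (TO_DATE('2009-01-01','YYYY-MM-DD')),
        PARTITION pmax    VALUES LESS THAN (MAXVALUE)
      )
    AS
    SELECT * FROM calls;
    -- Recreate indexes, constraints and grants on calls_new, then swap the names:
    -- ALTER TABLE calls RENAME TO calls_old;
    -- ALTER TABLE calls_new RENAME TO calls;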

  • Modifying the custom tables TBZ* generated by EEWB for BUPA?? Advisable?

    Hi,
    We have added new fields in BUPA using EEWB. The new fields are automatically added in the new section created on screen. Now our requirement is to move these fields to one of the existing sections and delete the new section. Is it advisable?
    I assume that in the EEWB extension wizard, we do not have any option of generating the new fields in existing custom sections. Please correct me if I am wrong.
    As per our current plan, to accomplish this change we are planning to modify the tables TBZ3H, TBZ3I, TBZ3C and TBZ3D for the view --> section --> screen mapping. Will that be enough, or will more changes be required?
    Also, a new BDT application is created for the new section and there are corresponding entries in TBZ0A, TBZ0B, TBZ0C and TBZ1F. Now we are not sure how the system would behave if we modify/delete these entries. Any pointers are welcome!
    Especially for the TBZ1F entries: should the new event function modules created for the new application be assigned to the already existing application?
    Please advise.

    http://scn.sap.com/people/rakesh.chugh/blog/2009/12/28/deletion-of-eewb-extensionproject-in-sap-crm-for-bupa
    Just in case you are looking for this info

  • Resizing the partition table

    Hi Guys,
    I got an 800G volume from EqualLogic on Solaris 10. It was created with newfs -T, but now I have to expand it to 1.5 TB and somehow Solaris is not getting the right
    LUN geometry/size. Any comments/suggestions?
    It is shrinking the size to 136.02 GB rather than expanding to 1.5 TB.
    Mode sense page(3) reports nsect value as 3000, adjusting it to -1096
    Please help me to solve this.
    Here are my steps
    Expand LUN size from 800G to 1.5 TB on Equallogic
    umount /san/users on Solaris
    bash-3.00# format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c0t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
    /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/sd@0,0
    1. c0t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
    /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/sd@1,0
    2. c1t6090A038105649B7A7A0F4324C81EE5Bd0 <EQLOGIC-100E-00-4.0 cyl 65533 alt 2 hd 16 sec 1600>
    /scsi_vhci/ssd@g6090a038105649b7a7a0f4324c81ee5b
    Specify disk (enter its number): 2
    selecting c1t6090A038105649B7A7A0F4324C81EE5Bd0
    [disk formatted]
    FORMAT MENU:
    disk - select a disk
    type - select (define) a disk type
    partition - select (define) a partition table
    current - describe the current disk
    format - format and analyze the disk
    repair - repair a defective sector
    label - write label to the disk
    analyze - surface analysis
    defect - defect list management
    backup - search for backup labels
    verify - read and display labels
    save - save new disk/partition definitions
    inquiry - show vendor, product and revision
    volname - set 8-character volume name
    !<cmd> - execute <cmd>, then return
    quit
    format> t
    Mode sense page(3) reports nsect value as 3000, adjusting it to -1096
    AVAILABLE DRIVE TYPES:
    0. Auto configure
    1. Quantum ProDrive 80S
    2. Quantum ProDrive 105S
    3. CDC Wren IV 94171-344
    4. SUN0104
    5. SUN0207
    6. SUN0327
    7. SUN0340
    8. SUN0424
    9. SUN0535
    10. SUN0669
    11. SUN1.0G
    12. SUN1.05
    13. SUN1.3G
    14. SUN2.1G
    15. SUN2.9G
    16. Zip 100
    17. Zip 250
    18. Peerless 10GB
    19. SUN146G
    20. EQLOGIC-100E-00-4.0
    21. other
    Specify disk type (enter its number)[20]: 0
    Mode sense page(3) reports nsect value as 3000, adjusting it to -1096
    c1t6090A038105649B7A7A0F4324C81EE5Bd0: configured with capacity of 136.02GB
    <EQLOGIC-100E-00-4.0 cyl 8608 alt 2 hd 16 sec 64440>
    selecting c1t6090A038105649B7A7A0F4324C81EE5Bd0
    [disk formatted]
    Thanks,
    Farhan

    I think Andreas is referring to fdisk partition 2, not slice 2, as his first image
    indicates.
    It sounds like Andreas was able to expand the size of fdisk partition 1 while
    booting from Live media without corrupting his root file system, if I'm reading
    this correctly. This probably worked because the new partition 1 started in
    the same location.
    In general, I wouldn't recommend this approach. It's like pulling the tablecloth
    and expecting the glass and silverware to remain intact.
    However, I see some s11 enhancements that might make this more doable,
    so I need to investigate.
    Alternative approaches are to attach a larger (spare) disk and detach the smaller
    disk.
    Thanks,
    Cindy

  • Including transport request while modifying the custom table

    Hello Experts,
    I have a scenario where I have to insert/modify some records in my custom table. Now we want these records to be captured in a transport request, so that we can track which records are for which user.
    I am developing a report with a parameter for the transport request on the selection screen. So the report will take the TR number directly from the selection screen, but how do I include the inserted records in that request?
    Regards,
    Harjeet

    Hello Altaf,
    I think you didn't understand my issue. I am not using any transaction. I am creating a report program which inserts new entries into my custom table based on a flat file on my presentation server.
    Now I want these entries to be included in the transport request that is given on the selection screen.
    Hope this gives a clearer idea of my requirement.
    Regards,
    Harjeet

  • Can I change some partitions of the partitioned table read-only?

    I have a table partitioned by range (partitioned hourly).
    I want to keep the history data online for queries (the history data is rarely accessed), but that way the table becomes very large and performance may be a big problem. So some questions arise:
    How can I change the aged partition read-only?
    How to decrease the workload on this table and improve the performance?
    Oracle 10g (10.2.0.3)+Solaris 10

    How can I change the aged partition read-only?
    In 10.2.x.x I think only tablespaces can be made read only.
    In 11g you can place tables in read only mode, but I don't know about specific partitions.
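    A common workaround on 10.2 is to move the aged partitions into their own tablespace and make that tablespace read only; a rough sketch (the table, index, partition and tablespace names are invented):
    -- Move an old partition (and its local index partition) into a dedicated tablespace.
    ALTER TABLE call_hist MOVE PARTITION p_2007_01 TABLESPACE hist_2007;
    ALTER INDEX call_hist_ix REBUILD PARTITION p_2007_01 TABLESPACE hist_2007;
    -- Make that tablespace read only; it then only needs to be backed up once,
    -- which also shortens the regular backup window.
    ALTER TABLESPACE hist_2007 READ ONLY;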

  • How to choose the partition in oracle tables?

    Dear all,
    I need to create partitions on production DB tables, but I am not familiar with creating partitions. I have just gone through some theory and understood range and list partitioning, i.e. range is normally used with VALUES LESS THAN boundaries such as Jan, Feb, March, or values less than 50,000, values less than 1,00,000, and so on, with each partition in a separate tablespace to improve performance; list partitioning is used for discrete values such as west, east, north, south.
    Now what I want to know is:
    1) When should I go ahead with partitioning?
    2) Before creating partitions, is it advisable to create indexes, or is that not needed?
    3) If I start to create partitions, which leading column should I partition on, and which partitioning method should I choose?
    Please let me know, and pardon me if I made any mistakes.
    Thanks in advance.

    I had to research the same topic. One of my teammates suggested a few points that might also help you.
    Advantages of partitioning:
    1) Partitioning enables data management operations such as data loads, index creation and rebuilding, and backup/recovery at the partition level, rather than on the entire table. This results in significantly reduced times for these operations.
    2) Partitioning improves query performance. In some cases, the results of a query can be achieved by accessing a subset of partitions, rather than the entire table. Parallel query/DML and partition-wise joins also benefit greatly.
    3) Partitioning increases the availability of mission-critical databases if critical tables and indexes are divided into partitions to reduce the maintenance windows, recovery times, and impact of failures. (Each partition can have separate physical attributes such as pctfree, pctused, and tablespaces.)
    Partitioning can be implemented without requiring any modifications to your applications. For example, you could convert a nonpartitioned table to a partitioned table without needing to modify any of the SELECT statements or DML statements which access that table. You do not need to rewrite your application code to take advantage of partitioning.
    Disadvantages of partitioning:
    1) The advantages of partitioning can be nullified when you use bind variables.
    There are additional administration tasks to manage partitions, e.g. if an index has to be rebuilt, the rebuild has to be done for each individual partition.
    2) More space is needed for partitioned objects.
    3) Some tasks take more time, such as creating non-partitioned (global) indexes and collecting "global" statistics (dbms_stats' granularity parameter has to be set to GLOBAL; if subpartitions are used, it has to be set to ALL).
    4) Partitioning implies a change of execution plan for ALL queries against the partitioned tables. So while some queries that use the chosen partition key may improve greatly, other queries that do not use the partition key can be impacted badly by the partitioning.
    5) To get the full advantage of partitioning (partition pruning, partition-wise joins, and so on), you must use the Cost Based Optimizer (CBO). If you use the RBO, and a table in the query is partitioned, Oracle kicks in the CBO while optimizing it. But because the statistics are not present, the CBO makes up the statistics, and this could lead to severely expensive optimization plans and extremely poor performance. A minimal example of a range-partitioned table follows below.
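    A minimal range-partitioned table with a local index (all names and bounds are invented for the example):
    CREATE TABLE orders (
      order_id   NUMBER,
      order_date DATE,
      amount     NUMBER(12,2)
    )
    PARTITION BY RANGE (order_date) (
      PARTITION p_2008_q1 VALUES LESS THAN (TO_DATE('2008-04-01','YYYY-MM-DD')),
      PARTITION p_2008_q2 VALUES LESS THAN (TO_DATE('2008-07-01','YYYY-MM-DD')),
      PARTITION p_max     VALUES LESS THAN (MAXVALUE)
    );
    -- A local index is partitioned the same way as the table,
    -- so it can be maintained one partition at a time.
    CREATE INDEX orders_date_ix ON orders (order_date) LOCAL;
    -- A query that filters on the partition key can be pruned to a single partition:
    SELECT SUM(amount)
    FROM   orders
    WHERE  order_date >= TO_DATE('2008-01-01','YYYY-MM-DD')
    AND    order_date <  TO_DATE('2008-04-01','YYYY-MM-DD');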

  • Regarding the modify of internal table

    do 40 times varying lga from p0008-lga01 next p0008-lga02
    varying bet from p0008-bet01 next p0008-bet02.
    *data: bet01 type p decimals 2.
    if lga is initial.
    exit.
    endif.
    INDEX = SY-INDEX.
    amt1 = bet .
    *bet01 = 20 / 100.
    bet = ( bet * 50 ) / 100 .
    CONCATENATE ch INDEX INTO BETXX.
    assign betxx to <F2>.
    assign (betxx)  to <F1>.
    <F2> = bet.
    modify p0008 index INDEX transporting F1
    write:/ <F2>.
    enddo.
    *endif.
    ENDCASE.
    endform.
    can somebody tell me how to modify the p0008 table at the place vr I'm having the bet01, bet02 and so on fields.
    vn I'm using this modify statement I'm getting an error as
    Unable to interpret "INDEX". Possible causes of error: Incorrect

    Hi Madhvi,
    When you are posting a thread, please make sure others do not find it difficult to understand. Please don't use abbreviations like "vr" and "vn".
    If you need prompt responses, please ensure you describe your requirements in
    a proper manner.
    You are getting the error because you are using the wrong syntax. The correct syntax is:
    MODIFY <itab> FROM <wa> [INDEX <idx>] [TRANSPORTING <f1> <f2> ...].
    The work area <wa> specified in the FROM addition replaces the existing line in <itab>. The work area must be convertible into the line type of the internal table.
    If you use the INDEX option, the contents of the work area overwrites the contents of the line with index <idx>. If the operation is successful, SY-SUBRC is set to 0. If the internal table contains fewer lines than <idx>, no line is changed and SY-SUBRC is set to 4.
    Without the INDEX addition, you can only use the above statement within a LOOP. In this case, you change the current loop line; <idx> is implicitly set to SY-TABIX.
    When you change lines in sorted tables, remember that you must not change the contents of key fields, and that a runtime error occurs if you try to replace the contents of a key field with another value. However, you can assign the same value.
    The TRANSPORTING addition allows you to specify the fields that you want to change explicitly in a list.
    Regards
    Indrajit.

  • Can I use gpt to recreate/unerase a partition table? (Rebuild the GPT/GUID partition table?)  I don't want to do FILE recovery.

    (Yes, I've  googled a bunch and read threads like this one already.)
    Can I use gpt or some other app to recreate/unerase a partition table?  That is, how can I rebuild a disk's GPT/GUID partition table?)  I don't want to do FILE recovery.
    What happened: Instead of erasing a single partition off a disk with many partitions, the entire partition table was erased (using Disk Utility, w/o deleting the underlying files).  Somehow the "Erasing a disk deletes all data on all its partitions." warning message was missed.
    I have a copy of the output of df, with the number of blocks in each partition, from just prior to the erasure, so I should be able to recreate the GPT/GUID partition table.  Editing the GPT with a hex editor is not feasible.  Simply recreating the partitions with Disk Utility will overwrite the key filesystem tables on each partition, and I don't want to do that, plus Disk Utility doesn't allow me to specify exact partition sizes anyway.
    Surely there's an app for rebuilding the partition table (other than emacs' hexl-mode!) for recreating/unerasing a partition table when the partition sizes and orders are known?  I've looked at the advertising for a bunch of recovery software and none of them clearly indicate that they will do what I want. 
    I guess I can try using gpt on a copy of the reformatted drive I've made with dd, and see what happens.  But perhaps someone knows of a tool that should do what I need, or knows if gpt is that tool or not.
    There are answers and tools that will do FILE recovery - search for files and recover the ones that aren't fragmented or deleted. As far as I can find, they just look for files on the disk and don't pay much, if any, attention to the filesystem info or directory hierarchy, which in this case is valuable. Of course I could send it in to DriveSavers, or the like. But none of that seems necessary, and the scavenging file recovery apps won't do the job well.
    E.g. some are mentioned here:
    I don't want to do FILE recovery.
    Thanks for any help.
    The links in this post are to pages describing the underlined term, e.g. the man pages for df and gpt.
    df output includes:
    Filesystem  512-blocks  Used  Available  Capacity  Mounted on

    Aperture has the ability to work with files in their existing location; these are called "referenced masters." When you import images, you should select "In their current location" in the "Store Files:" drop-down box. Have a read of the documentation for full specifics. I'm not sure how you can resolve your duplication; it might be some work, but next time have a read of the manual first.
    Information for versions is stored in the Aperture database (library file). The masters can be inside the library file itself, or they can be somewhere else.

  • Datapump skipping partitioned tables in the database

    I have run expdp on Oracle 10.2.0.4.0 on the AIX 5.6 platform. The export runs well, exporting rows in the database, but when it comes to the partitioned tables in the database it exports no rows for any of them. When I run a normal exp/imp, the partitioned tables are exported with all their rows.
    I used the following commands:
    expdp system/****** dumpfile=export_data.dmp directory=DATA_PUMP_DIR full=y logfile=export_dump.log
    Output for expdp on partitioned table:
    . . exported "SCOTT"."DEPT":"DEPT_2003_P1" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P10" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P11" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P12" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P2" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P3" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P4" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P5" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P6" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P7" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P8" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P9" 0 KB 0 rows
    And for exp:
    exp system/****** file=export_dump.dmp full=y log=export_log1.log
    Result from the export log for partitioned tables:
    . . exporting partition DEPT_2005_P1 881080 rows exported
    . . exporting partition DEPT_2005_P2 1347780 rows exported
    . . exporting partition DEPT_2005_P3 2002962 rows exported
    . . exporting partition DEPT_2005_P4 2318227 rows exported
    . . exporting partition DEPT_2005_P5 3122371 rows exported
    . . exporting partition DEPT_2005_P6 3916020 rows exported
    . . exporting partition DEPT_2005_P7 4217100 rows exported
    . . exporting partition DEPT_2005_P8 4125915 rows exported
    . . exporting partition DEPT_2005_P9 1913970 rows exported
    . . exporting partition DEPT_2005_P10 1100156 rows exported
    . . exporting partition DEPT_2005_P11 786516 rows exported
    . . exporting partition DEPT_2005_P12 822976 rows exported
    I am not sure about this behaviour from Data Pump; my database is more than 800GB and we want to migrate it from AIX to Linux.
    Thanks

    Sorry, I just copied and pasted some extracts from my exp and expdp logs.
    For testing purposes I tried to run a Data Pump export of only one partitioned table in the database and it goes through, but when I do the same with a full Data Pump export these partitioned tables are exported with no rows.
    Export: Release 10.2.0.4.0 - 64bit Production on Tuesday, 02 August, 2011 12:18:47
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/******** dumpfile=DEPT.dmp tables=scott.dept logfile=dept1.log
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 48.50 GB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/COMMENT
    Processing object type TABLE_EXPORT/TABLE/RLS_POLICY
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/TRIGGER
    Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "SCOTT"."DEPT":"DEPT_2009_P6" 1.452 GB 7377736 rows
    . . exported "SCOTT"."DEPT":"DEPT_2009_P7" 1.363 GB 6935687 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P6" 1.304 GB 6656096 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P7" 1.410 GB 7300618 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P7" 1.296 GB 6641073 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P6" 1.328 GB 6863885 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P6" 1.158 GB 6568075 rows
    . . exported "SCOTT"."DEPT":"DEPT_2009_P5" 1.141 GB 5801822 rows
    . . exported "SCOTT"."DEPT":"DEPT_2011_P5" 1.162 GB 6027466 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P7" 1.100 GB 6214680 rows
    . . exported "SCOTT"."DEPT":"DEPT_2011_P6" 1.106 GB 5762303 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P5" 1.133 GB 5859492 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P5" 1.001 GB 5664315 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P5" 1.023 GB 5229356 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P8" 1.078 GB 5549666 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P8" 940.3 MB 5171379 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P8" 989.0 MB 4920276 rows
    . . exported "SCOTT"."DEPT":"DEPT_2009_P8" 918.6 MB 4553523 rows
    . . exported "SCOTT"."DEPT":"DEPT_2006_P6" 821.0 MB 5220879 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P4" 766.6 MB 3832262 rows
    . . exported "SCOTT"."DEPT":"DEPT_2006_P8" 747.9 MB 4753538 rows
    . . exported "SCOTT"."DEPT":"DEPT_2006_P7" 741.8 MB 4708242 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P4" 734.2 MB 3713567 rows
    . . exported "SCOTT"."DEPT":"DEPT_2005_P7" 661.4 MB 4217100 rows
    . . exported "SCOTT"."DEPT":"DEPT_2005_P8" 647.1 MB 4125915 rows
    . . exported "SCOTT"."DEPT":"DEPT_2011_P4" 677.8 MB 3428887 rows
    I also tried to run a normal schema-by-schema export with the normal exp system/password command and got my dump file, which is about 300GB. When I run the imp system/password command and specify fromuser=<system > and touser=<schemas_in_the_dumpfile> separated by commas, it just comes up with this message:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export file created by EXPORT:V10.02.01 via conventional path
    import done in WE8ISO8859P9 character set and AL16UTF16 NCHAR character set
    Import terminated successfully without warnings.
    No tables are imported.
    If I specify imp system/password file=dept_export.dmp full=y log=dept_imp.log with the same dumpfile, it imports data from the dumpfile into my database.
    I am not sure what could be wrong with my dumpfile or my imp command and its parameters.

  • Modify the standard records in database table

    hi folks,
    could you tell me how to modify the standard records (values) in a database table?
    For example, in VBAP there is one field ZWERT (target value); I want to modify the values of this field.
    THANKS
    KUMAR

    Hi,
    It's not advisable to modify standard tables from a program. If you really want to, you can do it using MODIFY:
    MODIFY database_table FROM TABLE itab.
    Thanks,
    Sri.

  • This disk doesn't use the GUID Partition table scheme.

    When I install Lion, it gets stuck.
    Lion: This disk doesn't use the GUID Partition Table scheme. Use Disk Utility to change the partition scheme. Select the disk, choose the Partition tab, select the Volume Scheme and then click Options.
    The problem is with the last step:
    I cannot click Options.

    In order to repartition the startup drive you will have to boot from your Snow Leopard DVD, select Utilities and then Disk Utility.  YOU MUST ERASE YOUR DISK TO CHANGE THE PARTITION TABLE.  The good news is that I see you have a Time Machine backup.  Make sure that your backup is current before you erase your startup disk.
    Even before doing this, save a copy of the Lion installer (it is in the Applications folder) onto an external device.  Your external hard drive is a good spot; it won't interfere with your Time Machine backup. This will prevent you from having to download the installer again.
    The easiest way to proceed after making sure your Time Machine backup is current, saving a copy of the Lion installer, and repartitioning your startup disk, is to reinstall Snow Leopard on your newly partitioned disk. This will take a little longer but it is simple and is fully supported by Apple.  Once that is done and you are running Snow Leopard on your startup disk again, run the Lion installer from wherever you saved it, and then restore your files and settings from your Time Machine backup during the install process.
    There is an unsupported procedure for making a bootable Lion DVD, but it is more complex and is not supported by Apple.  If you are uncomfortable with any of this and have access to an Apple Store, make an appointment at the Genius Bar and they can help you through the process.
