Partition Question: Archiving Data

I'm purging and archiving data from a table based on a date range. During the purge I merge the partition function boundary points and then remove the partition file.
I was wondering what will happen if a user wants to enter data that falls in a boundary range I have already purged.
For example: I purged data older than 2010 and removed the partition file "Inventory_201006" after merging the partition function range. After that, a business user wants to enter data dated 2008-06-01. I'm wondering where it would go.
PS: I tried entering older data but got an error:
"The filegroup "Inventory_200806" has no files assigned to it. Tables, indexes, text columns, ntext columns, and image columns cannot be populated on this filegroup until a file is added."
Do I need to specify a default filegroup in the partition scheme, or somewhere else, before removing the file, so that any data older than 2010 gets inserted into that default filegroup?
Thanks!
ZK

A partitioned table always covers the range from negative infinity to positive infinity. You define boundary points with a partition function, which is applied to the table through a partition scheme, which in turn maps the partitions to filegroups.
So even if you merge boundary points, the overall range will not change; the data will just move between partitions. Also, instead of dropping the file and filegroup, you could potentially reuse it as the next used filegroup as you merge partitions.
Try this example:
--create database with all filegroups needed
CREATE DATABASE [myPartition]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'myPartition', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012\MSSQL\DATA\myPartition.mdf' , SIZE = 4096KB , FILEGROWTH = 1024KB ),
FILEGROUP [FG1]
( NAME = N'FG1', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012\MSSQL\DATA\FG1.ndf' , SIZE = 4096KB , FILEGROWTH = 1024KB ),
FILEGROUP [FG2]
( NAME = N'FG2', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012\MSSQL\DATA\FG2.ndf' , SIZE = 4096KB , FILEGROWTH = 1024KB ),
FILEGROUP [FG3]
( NAME = N'FG3', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012\MSSQL\DATA\FG3.ndf' , SIZE = 4096KB , FILEGROWTH = 1024KB ),
FILEGROUP [FG4]
( NAME = N'FG4', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012\MSSQL\DATA\FG4.ndf' , SIZE = 4096KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = N'myPartition_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012\MSSQL\DATA\myPartition_log.ldf' , SIZE = 4096KB , FILEGROWTH = 10%)
GO
USE [myPartition]
GO
IF NOT EXISTS (SELECT name FROM sys.filegroups WHERE is_default=1 AND name = N'PRIMARY')
ALTER DATABASE [myPartition] MODIFY FILEGROUP [PRIMARY] DEFAULT
GO
-- create partition function (ISO yyyy-mm-dd literals are language-neutral for the date type)
CREATE PARTITION FUNCTION pf_Date (date)
AS RANGE RIGHT FOR VALUES ('2010-01-01', '2011-01-01', '2012-01-01');
-- create partition scheme
CREATE PARTITION SCHEME ps_Date
AS PARTITION pf_Date
TO (FG1, FG2, FG3, FG4);
/* At this point, the partition ranges are:
   -infinity  to 12/31/2009 - FG1
   01/01/2010 to 12/31/2010 - FG2
   01/01/2011 to 12/31/2011 - FG3
   01/01/2012 to  infinity  - FG4 */
-- create a table on this partition scheme
CREATE TABLE mytable
(
    sno   int IDENTITY(1,1),
    sname varchar(20),
    sdate date,
    CONSTRAINT PK_mytable PRIMARY KEY CLUSTERED (sdate)
) ON ps_Date (sdate);
-- insert one row per day across all partitions
DECLARE @a date;
SET @a = '2009-01-01';
WHILE (@a < '2014-01-01')
BEGIN
    INSERT INTO mytable (sname, sdate)
    SELECT CAST(YEAR(@a) AS varchar(4)) + 'part', @a;
    SET @a = DATEADD(day, 1, @a);
END
-- check the row counts in each partition
SELECT row_count, * FROM sys.dm_db_partition_stats WHERE object_id = OBJECT_ID('mytable');
-- list the boundary points of the partition scheme
SELECT A.name AS [PartitionScheme], B.name AS [PartitionFunction], C.boundary_id, C.value
FROM sys.partition_schemes A
LEFT OUTER JOIN sys.partition_functions B ON A.function_id = B.function_id
LEFT OUTER JOIN sys.partition_range_values C ON C.function_id = B.function_id;
-- merge the 2010 boundary point: the rows from FG2 move into FG1 and FG2 is freed
ALTER PARTITION FUNCTION pf_Date ()
MERGE RANGE ('2010-01-01');
-- then reuse the freed filegroup as the target of the next split
ALTER PARTITION SCHEME ps_Date
NEXT USED FG2;
ALTER PARTITION FUNCTION pf_Date ()
SPLIT RANGE ('2013-01-01');
/* At this point, the partition ranges are:
   -infinity  to 12/31/2010 - FG1
   01/01/2011 to 12/31/2011 - FG3
   01/01/2012 to 12/31/2012 - FG4
   01/01/2013 to  infinity  - FG2 */
-- check the row counts in each partition again
SELECT row_count, * FROM sys.dm_db_partition_stats WHERE object_id = OBJECT_ID('mytable');
I assume you understand that infinity here is not really infinity, but the minimum/maximum possible value for the data type.
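To address the error in your original post directly: the order of operations matters. Merge the boundary point first, so the partition scheme no longer maps any range to the emptied filegroup, and only then remove the file and the filegroup. A rough sketch using the filegroup name from your post; the partition function name, database name, and logical file name below are assumptions:
-- merge the boundary so no partition maps to Inventory_200806 any more;
-- inserts dated 2008-06-01 will then land in the surviving neighbor partition
ALTER PARTITION FUNCTION pf_InventoryDate ()
MERGE RANGE ('2008-06-01');
-- only now is it safe to drop the file and the filegroup
ALTER DATABASE Inventory REMOVE FILE Inventory_200806_Data;
ALTER DATABASE Inventory REMOVE FILEGROUP [Inventory_200806];
Alternatively, if you want to keep the partition, adding a file back with ALTER DATABASE ... ADD FILE ... TO FILEGROUP [Inventory_200806] would make the filegroup writable again.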
Hope it Helps!!

Similar Messages

  • Data Warehouse Partitioning question

    Hi All,
    I have a data warehousing partitioning question - I am defining partitions on a fact table in OWB and have range partitioning on a contract number field. Because I am on 10gR2 still, I have to put the contract number field into the fact table from its dimension in order to partition on it.
    The tables look like
    Contract_Dim (dimension_key, contract_no, ...)
    Contract_Fact(Contract_Dim, measure1,measure2, contract_no)
    So my question:
    When querying via reporting tools, my users are specifying contract_no conditions on the dimension object and joining into the contract_fact via the dimension_key->Contract_dim fields.
    I am assuming that the queries will not use partition pruning unless I put the contract_fact.contract_no into the query somehow. Is this true?
    If so, how can I 'hide' that additional step from my end-users? I want them to specify contract numbers on the dimension and have the query optimizer be smart enough to use partition pruning when running the query.
    I hope this makes sense.
    Thanks,
    Mike

    I am about to start a partitioning program on my dimension / fact tables and was hoping to see some responses to this thread.
I suggest that you partition the tables on the dimension key, not on an attribute. You could partition both the fact and dimension tables by the same rule. Hash partitions seem to make sense here, as opposed to range or list partitions. A minimal sketch of that suggestion follows below.
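    A minimal sketch of that suggestion using the table layout from the post (the partition count is illustrative): hash-partitioning both tables on the shared key, with the same number of partitions, lets Oracle use full partition-wise joins when the fact is joined to the dimension on that key.
    CREATE TABLE contract_dim (
      dimension_key NUMBER PRIMARY KEY,
      contract_no   VARCHAR2(30)
    )
    PARTITION BY HASH (dimension_key) PARTITIONS 16;
    CREATE TABLE contract_fact (
      contract_dim  NUMBER NOT NULL REFERENCES contract_dim (dimension_key),
      measure1      NUMBER,
      measure2      NUMBER,
      contract_no   VARCHAR2(30)
    )
    PARTITION BY HASH (contract_dim) PARTITIONS 16;
    Note that hash partitioning does not give the contract_no range pruning asked about above; it trades that for join efficiency on the key.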
    tck

  • Archiving data in ods

    Hello out there
    In our company we are going to archive transactional data for the first time.
    I am going to create several ODSes. Into those I am going to transfer the data before archiving it in R/3.
    Now my question: I will set a flag somewhere, and if it is set, it should be impossible to delete the data in the ODS via the RSA1 transaction.
    I hope this is possible but I don't know how to do it.
    /7 Stisse

    Hi dear and welcome on board!
    Is your doubt related to possible authorization settings, in order to avoid potential (undesired) deletions of this data?
    I think you can move these ODSs into a dedicated InfoArea and then avoid assigning anyone access to it...
    Otherwise, you can archive data in BW too!
    Let me know your specific requirements (do you have to do some reporting on this archived data?)...
    Bye,
    Roberto

  • How to record partitioned tables in Data Modeler ?

    Hi,
    I have discovered the options for partitioned tables. Some questions remain, however.
    1) Static partitions
    For one table, a set of predefined partitions is set up in the data model. In certain situations, a new partition is added. Is it possible to generate alter table DDL from the model that will add this partition ?
    2) Dynamic partitions
    For another table, partitions are added and deleted on the fly (exchange partition). For this table, I would like to record just one partition, which is added when the tables are created initially. It is not important to include the actual partitions when synchronizing with the database. Is it possible to skip this for a certain table (i.e. record this preference with the table, since it does not apply to all partitioned tables...)?
    Thanks,
    Richard.

    Hi Richard,
    For one table, a set of predefined partitions is set up in the data model. In certain situations, a new partition is added. Is it possible to generate alter table DDL from the model that will add this partition ?
    In Data Modeler version 4.1, ALTER TABLE ... ADD PARTITION statements are generated for new LIST partitions, but not for other types of partitions.
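    For reference, the generated statement for a new LIST partition takes this form (the table, partition, and value names are illustrative, assuming a table list-partitioned on a region column):
    ALTER TABLE sales
      ADD PARTITION p_region_west VALUES ('WEST');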
    For another table, partitions are added and deleted on the fly (exchange partition). For this table, I would like to record just one partition, which is added when the tables are created initially. It is not important to include the actual partitions when synchronizing with the database. Is it possible to skip this for a certain table (i.e. record this preference with the table, since it does not apply to all partitioned tables...)?
    I've logged an enhancement request on this.
    Regards,
    David

  • LPAR - LOGICAL PARTITION QUESTION -

    Hello SDN Experts.
    LPAR (LOGICAL PARTITION QUESTION)
    Our current production environment is running as a distributed installation on IBM System p5 570 servers with AIX 5.2; each node is running two applications: SAP ERP 2005 SR1 (ABAP + Java) and CSS (Customer Service System).
    Node One
    • SAP Application (Central Instance, Central Services)
    • Oracle 9i instance for the CSS application
    Node Two
    • Oracle 10g instance for the SAP application
    • CSS application
    To improve performance we are planning to create a new LPAR for SAP.
    According to the IBM HW partner, an LPAR is logically isolated, with its own HW/SW resources (CPU/memory/disk, IP/hostname/mount points)...
    Question:
    I have these two possible solutions for copying the SAP instances (app + db) to the new LPAR. Can I apply SCENARIO 2, which in my opinion is easier than SCENARIO 1?
    SCENARIO 1.
    In order to migrate the application and database instances to the new LPAR, follow the procedure explained in the guide:
    (*) System Copy for SAP Systems Based on SAP NetWeaver 2004s SR1 ABAP+Java Document version: 1.1 ‒ 08/18/2006
    SCENARIO 2.
    After creating all the file systems required in AIX, copy the data from the application and database instances to their respective LPARs, and change the IP addresses and hostnames in the parameter files according to the following SAP Notes:
    Note 8307 - Changing host name on R3 host
    Note 403708 - Changing an IP address
    Which scenario does SAP recommend in this case?
    Thanks for your comments.

    If your system is a combined ABAP + Java instance you can't manually change the hostname. It's not only the places listed in that note but many more, partially in .properties files on the filesystem, partially in the database.
    Doing it manually may work, but since the process is not documented anywhere, and since it depends on the applications running on top of the J2EE instance, it's not supported.
    For ABAP + Java instances you must use the "sapinst way" to get support in case of problems.
    See Note 757692 - Changing the hostname for J2EE Engine 6.40/7.0 installation
    Markus

  • How to archive data which is in sap content server

    Hi Experts,
    We are facing an issue with data stored in the content server. Over time, the data in the content server keeps growing and is reaching the content server's size limit. We want to archive data that is already present in the content server.
    Is there any strategy for this, and how can we archive data that is already inside the SAP content server?
    Thanks.
    Regards
    Richa Koolwal

    Thanks Ajay,
    It really helps to answer some of my questions.
    I would like to ask you for some more information about the content server. As you said, the content server allows the database size to grow up to 64 TB, and you advise either of these two activities:
    1. If I shift the old data to some other storage, should that storage device be connected to the content server?
        e.g. if I decide to store data on some file system, do I have to connect the file system to the content server with a specific driver for that file system?
    2. Next you said to install a new content server; but as I understand it, the content server provides the facility to connect to various database instances if one instance reaches its 64 TB limit, so would it be better to install another database instance or a new SAP content server?
    3. Which storage medium (file system or database instance) is preferable in terms of cost and security to enhance the scalability of the content server?
    Thanks.
    Regards
    Richa Koolwal

  • Unable to view the AUSP archive data

    Hi,
    I have archived the data for the FI_ACCRECV archive object. I do not see the AUSP table in the write statistics.
    I am not sure how to view the archived data of the AUSP table.
    Kindly guide me. If this is not the correct forum to ask, please let me know where I can post this question.
    Thanks
    Mallika

    Hi
    In tcode DB15 you can see the archiving object for the table AUSP. In tcode AOBJ for object FI_ACCRECV, I can see that there isn't any report to read this data (if the archiving file is in its path). I'm sorry.
    Regards
    Eduardo

  • Archive data

    Hi,
    I am new to the Oracle database, and I need to archive data that is older than 60 days and delete it from the tables. The tables have referential integrity constraints. The archived data needs to be restorable to the tables when necessary. I have read the previous messages posted by others and learned that there are at least three ways to do it.
    Firstly, I can use Oracle Partitioning. But this is only available in Enterprise Edition, and the version I will be using is Standard Edition One, so I can't use the partitioning method.
    Secondly, I can use the Oracle Export utility with the query option to export the data and delete it from the tables after export.
    Thirdly, I can create the same set of tables (historical tables) and write scripts to copy the data from the current tables to the historical tables, then delete the data in the current tables.
    In my opinion, the second method is more like "archiving", as Export writes the data into a file which can be stored on some sort of storage device.
    The third method simply stores the historical data in the database; I would still need to back up those historical tables in case the database crashes.
    If you know any other methods or any improvements to the above methods, please let me know. Thanks.

    You've got all the options covered.
    The devil is in the details... knowing the schema well enough to be able to write the correct queries for the export, or to copy to archive tables.
    Note: those archive tables can be moved to another database, which you can make available to users if they need to query it for "historical" data, provided all parent-child / master-detail data relationships are maintained.
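    A minimal sketch of the third method, assuming illustrative orders/orders_hist tables with identical columns; with referential integrity constraints in place, child tables must be archived and purged before their parents:
    -- copy rows older than 60 days into the history table, then delete them
    INSERT INTO orders_hist
      SELECT * FROM orders WHERE order_date < SYSDATE - 60;
    DELETE FROM orders WHERE order_date < SYSDATE - 60;
    COMMIT;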

  • How Transformations or Routines will work for the NLS Archived data in BW on HANA?

    Hi,
    I have archived data from a BW on HANA system into NLS (SAP IQ).
    Now I'm creating a transformation or routines; in this case, how will the archived data be read without writing any ABAP code?
    Thanks & Regards,
    Ramana SBLSV.

    Hi Ramana,
    Maybe I can try to explain with two cases; hopefully one of them is yours:
    DSO1 -> you have archived this DSO.
    Case 1:
    You want to load from DSO1 -> DSO2 (direct transformation).
    So here, your question is: while loading from DSO1 to DSO2, will the archived data also be loaded into DSO2?
    If so, there is an InfoProvider property you need to change for this:
      In Extras -> InfoProvider Properties -> Change -> switch on Near-Line Usage (by default it is off).
    The archived data will then also be used in this case.
    Case 2:
    You are loading from DSO3 to DSO2, with a lookup on DSO1.
    So in the lookup on DSO1, you need to use the archived data as well?
    In this case, you have to use the Read from DSO rule type. This will access both the active table and the NLS data.
    Let me know if this is not the case for you.
    Regards,
    Sakthi

  • HD partition questions...

    Hi all,
    I have questions about partitioning the HD. Why should one partition the HD? Is it really important? And (if yes) how, and what is the best option for doing it?
    thanks!

    One of the reasons very early Mac users partitioned their 1st or 2nd internal hard disk was that, at the time, the OS was unable to read a disk larger than (for example) 160GB; that meant if a 160GB disk was installed, the Mac read it only as 80GB (again, for example), not at its full size.
    That's two separate issues.
    1 Computers (Windows and Mac) which don't have the correct ATA controller cannot access more than 128GB in any one device. Put a 160GB drive in there, and they'll see just 128GB. It doesn't matter how it's partitioned; they won't see past the 128GB line. (This applies to older FireWire enclosures, too. Put a 160GB drive into such an enclosure and you have a 128GB drive.) Attach the same drive to a newer ATA controller, and it will see the full capacity of the drive.
    If a computer (or PCI controller card, or FireWire enclosure) has 'large drive support' enabled, it can see past 128GB. If it does not, it can't. All G3s (beige and B&W), plus all graphite G4s and the first-generation Quicksilver G4s, can't see past 128GB unless you install a PCI controller card which has large drive support. Later Quicksilvers and all Macs since then can see past 128GB.
    SCSI drives do not have this limitation. FireWire drives don't either... so long as the controller in the drive enclosure has large drive support enabled.
    2 The major reason for partitioning drives before the arrival of HFS+ was that HFS was not designed for large drives, where 'large' is 'more than 500MB'. (That's megabytes, not gigabytes.) HFS was designed in 1984-5, when a big hard drive was 20MB. (I have files larger than that, now.) There was a fixed number of allocation blocks assigned to the drive, and a block was the smallest unit which could be written to or read from the drive. On a 500MB drive the blocks were 8KB each, so a file which had one single character in it would take up 8KB. A 500MB drive also used all the blocks available; anything bigger than 500MB used the same number of blocks, no more could be allocated under HFS, and merely made them bigger. This meant that a 5GB drive had 80KB blocks, and a 50GB drive had 800KB blocks. A file which held a single character would take up 800KB on a 50GB drive, unless the drive was partitioned into smaller logical volumes. If you have a lot of small files, you will be wasting a vast amount of disk space, as files which are really a few bytes in size eat hundreds of kilobytes of space each. OS X is a UNIX, and UNIX systems have swarms of small files.
    HFS+, like HFS, also has a limited number of blocks available. However, that number is considerably larger than the 65,535 blocks available to HFS. (I think it's 4,194,303, but I could be wrong.) HFS+ volumes use 8KB, or even 4KB or 2KB, blocks, even for drives whose capacities are measured in hundreds of gigabytes, so a small file takes up a whole lot less space.
    Back when HFS+ first came out, I backed up two separate HFS partitions. Both were 4.25GB in size, and both had in excess of 3GB of stuff on them. I reformatted the drive as one 8.5GB partition and restored the files to the single partition. The data took up less than 4.5GB instead of the 6.5-7GB it had taken up under HFS.
    If you do a get info on a file, the Finder will report two sizes: the size the file is, and how much space it takes up on the drive. A good way to see the difference between HFS and HFS+ file systems is to do get infos on the same file on a 10GB or larger partition formatted HFS, and on a similar size partition formatted HFS+.
    Partitioning the drive also prevents problems like a total failure of the entire disk and losing all data.
    If you have a physical drive problem you'll lose everything from all partitions. It'd be better to have different drives; it's not likely that two drives will fail at the same time.
    And also, accessing a smaller partition is faster. But with modern OS technology like OS X, and with bigger, better, and faster hard drives, there really isn't a need to partition a drive anymore.
    Some people say that there are still reasons to partition a drive. If, for example, you want to set up a dual-boot system, the easiest way is to have two partitions. I have the WinBox sitting next to my eMac configured with partitions for WinXP Home, WinXP Pro, WinServer2003, and Ubuntu Linux. Because it's a lot easier to set different actual drives to boot up on my Mac, I don't bother partitioning it. If I did, I'd probably have a partition for the system, so that if something goes belly-up in the system I can reformat and reinstall without affecting anything else, plus a partition for my apps and data. Or maybe a partition for my apps and a separate partition for my data; that'd make backing up a snap. What I've done is to just have one partition... and to back up the whole thing to a separate drive. FireWire drives are cheap.
    But do remember to practice the good habit of backing up your data, because partitioned or not, you never know when a drive may fail on you.
    There are two types of hard drive:
    1 those which have failed
    2 those which haven't failed... yet.
    If you want to keep something, back it up. If you really want to keep something, back it up twice, on different media (such as a hard drive and a DVD). If you're truly paranoid about losing something, back it up three or more times. If it's not backed up, you will lose it sooner or later. All it takes is one lightning strike, or a drunk hitting a utility pole and dropping the 24kV primary distribution line on top of the 400V secondary distribution line, sending 24 thousand volts at 20 amps down the 400V line... (Yes, that's happened to me. Yes, I had backups, so data loss was minimal. Yes, the computers were fried, as were a lot of other things. Ever seen a surge protector and a UPS which have been hit by 24,000 volts? No, the insurance company was not happy.)

  • Adding a partition by giving date range

    Hi
    I have a table partitioned by range over a date in Julian format. I want to add a new partition for the date range 01-Jan-2007 to 15-Feb-2007. There are already partitions bounded at the last days of January and February.
    Can I add a new partition as
    alter table <table_name> add partition <partition_name>
    values less than <date_in_julian> and greater than <date_in_julian>

    I don't believe that is valid syntax for range partitioning.
    The "greater than" piece is taken care of by the "values less than" definition of the partition prior to the one being added.
    If I understand your question: you have a partition with a date range ending prior to '01-feb-2007' and another partition with a date range ending prior to '01-mar-2007', and you would like to add a partition to cover the date range of 01-jan-2007 through 15-feb-2007? If so, that is not possible; it would be a different table altogether. If you need to adjust the date ranges of the current partitions, you could split the February partition, then merge the January partition with the newly split first half of February. Or, if you need a mechanism to focus on that date range, perhaps a materialized view of the target table specifying that date range.
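    A sketch of that split-then-merge approach, with illustrative table and partition names and a DATE-typed key (with the original Julian-format column, the boundary literal would be the corresponding Julian number instead):
    -- split February at the 16th, so Feb 01-15 becomes its own partition
    ALTER TABLE my_table
      SPLIT PARTITION p_feb2007 AT (TO_DATE('2007-02-16', 'YYYY-MM-DD'))
      INTO (PARTITION p_feb2007_a, PARTITION p_feb2007_b);
    -- merge January with the first half of February
    ALTER TABLE my_table
      MERGE PARTITIONS p_jan2007, p_feb2007_a
      INTO PARTITION p_jan01_to_feb15;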

  • URGENT!!!!  - Reading Archived Data from a Report.

    Hi,
    The data older than 15 months in some tables (BSEG, BSAK) used in the report YGF11347 has been archived.
    Previously this report YGF11347 was run for vendor payment information for any time frame through FBL1N. So for any vendor line items from 2004, 2005, 2006 and the current year, all data elements were available to be extracted and displayed in the report.
    Recently the check amount is missing when executing the report for payments made prior to 04/01/2006. This is because the data older than 15 months was archived.
    My requirement is that I need to display this archived data in the report "YGF11347" as well.
    Can anyone please help me with this issue?
    Can anyone please help me on this issue.
    Regards,
    Akrati

    Hi,
    Check this link; hopefully it will help you solve your problem:
    http://www.ams.utoronto.ca/Assets/output/assets/ixos_637070.pdf.pdf
    Thanks
    mrutyun^

  • Basic questions on data modeling

    Hi experts,
    I have some basic questions regarding data modeling within MDM. I understand the available table types and the concept of lookup fields. I know that the MDM data modeling concept is different from the relational concept. But having a strong database background, my first step was to design a relational data model which I would like to transfer to an MDM repository. Unfortunately I didn't find good material on this. So here are some questions; maybe you can help me:
    1) Is it the right approach to model n:m relationships with multivalued lookup fields? E.g. main table Users with a lookup field from subtable SapAccounts (a user can have accounts in different SAP systems, that is, more than one account).
    2) Does a record always have to be unique in MDM repositories (e.g. should we use Auto IDs in every table, or do we have to mark a combination of fields as unique)? Is a composite key of 2 or more fields represented by marking these fields as unique?
    3) Is the concept of relationships in MDM only based on relationships between single records (not valid for all records in a table)? Is it necessary to define all relationships similar to the relational data model in MDM? Is there something similar to referential integrity in MDM?
    4) Is it possible to change the main table to a sub-table later on, if we realize that it also has to be used as a lookup table for another table (when extending the data model), or do we have to create a new repository from scratch?
    Thank you for your answers.
    Regards, bd

    Yes, you are correct. It is difficult to map a relational database to an MDM one. But then, MDM is not 'just' a database; it holds much more 'master' information compared to any relational DB.
    1) Is it the right approach to model n:m relationships with multivalued lookup fields? E.g. main table Users with a lookup field from subtable SapAccounts (a user can have accounts in different SAP systems, that is, more than one account).
    Yes. Here you need to use MV lookup tables, or you can also try qualified tables if it gets more complex.
    2) Does a record always have to be unique in MDM repositories (e.g. should we use Auto IDs in every table, or do we have to mark a combination of fields as unique)? Is a composite key of 2 or more fields represented by marking these fields as unique?
    The concept of uniqueness differs here in that you also have something called Display Fields (DF). A combination of DFs can also be treated as unique. For instance, while importing records, if you select these DFs as a combination, you will eliminate any possible duplicates based on that combination. An Auto ID is one way to have a unique ID once a record is within MDM, while you use UFs or DFs to eliminate any possible duplicates at import level.
    3) Is the concept of relationships in MDM only based on relationships between single records (not valid for all records in a table)? Is it necessary to define all relationships similar to the relational data model in MDM? Is there something similar to referential integrity in MDM?
    Hmm... good one, referential integrity. What I assume you are asking is that if you have relationships between tables, then removing a record should not be possible while it is a foreign key for some record. MDM does not work that way. Relationships within MDM are physical, not conceptual. For instance, a material can have components; if the material does not exist, any relationship to its components is not worthwhile to maintain, so the relationship is eliminated. In the relational model, relationships are more conceptual. Hence, with MDM's lookup tables and main table, you do not need to maintain these kinds of relationships on your own.
    4) Is it possible to change the main table to a sub-table later on, if we realize that it also has to be used as a lookup table for another table (when extending the data model), or do we have to create a new repository from scratch?
    No, it is not possible to convert the main table. There is only one main table and it cannot be changed.
    I went for the same option but it did not work. What I suggest is to look at your legacy systems one by one and see which fields in general can be classified as Master, Reference, or Transactional - you will start getting answers immediately.

  • Changing the default to location of Archiver data

    Hi,
    Is there any way to change the default location of the Archiver data? By default it goes to the archives folder; I want it to go to some other folder.
    Similarly, when you take a backup of the site, I want it to be stored in some other folder as well.
    Is that possible?

    For the Archiver, see My Oracle Support note "How can I move the archive directory?" (Doc ID 449054.1).
    Not sure what you mean by a backup. There is no built-in disaster recovery tool for UCM. Standard system backups (file system and DB) should be used for restore.

  • In PL-SQL archive data from a table to a file

    I am currently developing a VB app where I need to archive data from a table to a file. I was hoping to do this with a stored procedure. I will also need to be able to retrieve the data from the file for future use if necessary. What file types are available? Thanks in advance for any suggestions.

    What about exporting in an Oracle binary format? The export files cannot be modified. Is there a way to use the Export and Import utilities from PL/SQL?
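    If a plain text file is acceptable, one common approach is the UTL_FILE package from a stored procedure. A minimal sketch, assuming an illustrative orders table and a directory object ARCHIVE_DIR that a DBA has created and granted write access on:
    CREATE OR REPLACE PROCEDURE archive_old_orders IS
      f UTL_FILE.FILE_TYPE;
    BEGIN
      -- open the file for writing in a directory the database can access
      f := UTL_FILE.FOPEN('ARCHIVE_DIR', 'orders_archive.csv', 'w');
      FOR r IN (SELECT order_id, order_date
                  FROM orders
                 WHERE order_date < SYSDATE - 60) LOOP
        -- write one CSV line per row
        UTL_FILE.PUT_LINE(f, r.order_id || ',' || TO_CHAR(r.order_date, 'YYYY-MM-DD'));
      END LOOP;
      UTL_FILE.FCLOSE(f);
    END;
    /
    The file is plain text, so it can be reloaded later with SQL*Loader or an external table; an Export dump, by contrast, is binary and readable only by Import.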
