SharePoint_Config mdf (not ldf) file growing
Hi,
My SharePoint_Config mdf data file (not the ldf log file) keeps growing by a couple of GB each week. Transaction log backups are done every hour, so the ldf file is not growing. SharePoint_Config is in the full recovery model. The cause may be that I am doing a PowerShell full farm backup once a day.
Has anyone encountered this before, and what is your recommendation? What is the best way to keep the mdf file small, or to shrink the mdf file?
Thank you.
Can you check the size of each table? Run the following on your Configuration database. Run it a few days later to see what has changed:
SELECT
t.NAME AS TableName,
s.Name AS SchemaName,
p.rows AS RowCounts,
SUM(a.total_pages) * 8 AS TotalSpaceKB,
SUM(a.used_pages) * 8 AS UsedSpaceKB,
(SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB
FROM
sys.tables t
INNER JOIN
sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN
sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN
sys.allocation_units a ON p.partition_id = a.container_id
LEFT OUTER JOIN
sys.schemas s ON t.schema_id = s.schema_id
WHERE
t.NAME NOT LIKE 'dt%'
AND t.is_ms_shipped = 0
AND i.OBJECT_ID > 255
GROUP BY
t.Name, s.Name, p.Rows
ORDER BY
t.Name
My assumption would be that the TimerJobHistory table is growing.
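If you want a quicker spot check of just that table, a minimal sketch (assuming the configuration database is named SharePoint_Config and the table is dbo.TimerJobHistory, its usual name) would be:
USE SharePoint_Config;
EXEC sp_spaceused N'dbo.TimerJobHistory';
If that table is indeed the culprit, the supported fix is to shorten the retention of the "Delete Job History" timer job in Central Administration rather than deleting rows from the database directly, since modifying SharePoint databases by hand is unsupported.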
Trevor Seward
Follow or contact me at...
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
Similar Messages
-
Their original SQL Server was 2008, with the service pack level unknown, installed on the D: and C: drives. A power spike corrupted their O/S on the C:
drive and someone reinstalled both the O/S and SQL Server, which is now SQL 2008 SP4. They have intact files for all system databases, both .mdf and .ldf. Is there some way they can reconnect the user databases using the intact copies
of the previous system databases? I have heard that if SQL Server is stopped, the previous master and msdb .mdf and .ldf files are moved into place, and the server is restarted, then any previous user database .mdf and .ldf files can be accessed by the SQL Server.
Is this the case, or are there details missing?
Hello! Try these steps:
1. Open your SQL Server Management Studio console. This application shortcut is available in the SQL Server folder on the Windows Start menu.
2. Enter the system administrator user name and password. SQL Server's administrator user name is "sa." This account is required for the privileges to restore the database. If you are restoring on a hosting provider's server, use the administrator user name and password they supplied for your account.
3. Right-click your database name and select "Attach." In the new window that opens, click the "Add" button to open a dialog box.
4. Select your MDF file and press the "OK" button. It may take several minutes to restore the database if it is a large file. Once the process is finished, browse your tables to verify the data. The database is now restored.
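If you prefer to script the attach for a user database rather than use the dialog, a rough T-SQL sketch (the database name and file paths here are placeholders, not values from this environment):
-- Hypothetical names/paths; point these at the intact .mdf/.ldf copies
CREATE DATABASE [UserDB]
ON (FILENAME = N'D:\Data\UserDB.mdf'),
(FILENAME = N'D:\Logs\UserDB_log.ldf')
FOR ATTACH;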
If nothing helped, try to use:
https://www.youtube.com/watch?v=1cbOYdvBW2c -
Could not find file ERROR while connecting to access (mdb) file
I am getting the following error while connecting to the database:
javax.servlet.ServletException: [Microsoft][ODBC Microsoft Access Driver] Could not find file 'C:\WINNT\system32\Database1.mdb'.
Following is my code:
<%
Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
String database = "jdbc:odbc:Driver={Microsoft Access Driver (*.mdb)};DBQ=";
String filename = "D://BeaTemp//Database1";
database += filename.trim() + ";DriverID=22;READONLY=true}";
Connection con = DriverManager.getConnection(database, "", "");
Statement st = con.createStatement();
ResultSet rs = st.executeQuery("select * from Database1.Employee");
ResultSetMetaData rsmd = rs.getMetaData();
while (rs.next()) {
    for (int i = 1; i <= rsmd.getColumnCount(); i++) {
        out.println(rs.getString(i));
    }
}
%>
What could be the problem..??
Thanks in advance.
Satish.
Got the solution.
The statement:
ResultSet rs=st.executeQuery("select * from Database1.Employee");
should be changed as:
ResultSet rs=st.executeQuery("select * from Employee");
Now, it is working fine.
Regards,
Satish. -
Does shrinking LDF file have any impact on restoration?
Right now my LDF file is almost 50 GB. My database is only 10 GB in size.
Here are my questions:
1. Why is the LDF file so huge?
2. Is there any way to avoid the LDF file to grow so big?
3. If I use shrink, will I have any issue when restoring the database in the future?
Hi,
1. There can be many reasons. Do you take frequent transaction log backups?
2. Yes, by taking regular transaction log backups and setting correct, reasonable autogrowth values for the data and log files.
3. No, not at all. Shrinking the log file only removes free space.
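For illustration only, a typical sequence looks like the sketch below; the database and logical log file names are placeholders (check yours with SELECT name FROM sys.database_files):
-- Back up the log first, then shrink the log file to a target size in MB
BACKUP LOG [MyDatabase] TO DISK = N'D:\Backups\MyDatabase_log.trn';
USE [MyDatabase];
DBCC SHRINKFILE (N'MyDatabase_log', 1024);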
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP -
Hi
I have a query regarding reading zip files created by the following class:
class CreateZip extends Thread {
    CreateZip(Vector FileList) {
        try {
            File zipFile = new File("C:/test.zip");
            if (zipFile.exists()) {
                System.err.println("Zip file already exists, please try another");
                System.exit(0);
            }
            FileOutputStream fos = new FileOutputStream(zipFile);
            ZipOutputStream zos = new ZipOutputStream(fos);
            int bytesRead;
            byte[] buffer = new byte[1024];
            CRC32 crc = new CRC32();
            for (int i = 0, n = FileList.size(); i < n; i++) {
                String name = FileList.get(i).toString();
                File file = new File(name);
                // Skip entries that do not exist or are not files (i.e. directories)
                if (!file.exists() || !file.isFile()) {
                    System.err.println("Skipping: " + name);
                    continue;
                }
                // First pass: compute the CRC, which a STORED entry must declare up front
                BufferedInputStream bis = new BufferedInputStream(new FileInputStream(file));
                crc.reset();
                while ((bytesRead = bis.read(buffer)) != -1) {
                    crc.update(buffer, 0, bytesRead);
                }
                bis.close();
                // Second pass: write the uncompressed entry into the zip
                bis = new BufferedInputStream(new FileInputStream(file));
                ZipEntry entry = new ZipEntry(name);
                entry.setMethod(ZipEntry.STORED);
                entry.setCompressedSize(file.length());
                entry.setSize(file.length());
                entry.setCrc(crc.getValue());
                zos.putNextEntry(entry);
                while ((bytesRead = bis.read(buffer)) != -1) {
                    zos.write(buffer, 0, bytesRead);
                }
                bis.close();
            }
            zos.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
It accepts a vector which contains files and/or folders and is supposed to add them to a zip file.
All that happens is the zip file is created but when I open it there are no files inside that are visible. The file size does appear to grow and there are no errors generated.
Any ideas?
Many thanks
You went to the CVI link instead of the LabVIEW one. The LabVIEW driver can be found here.
-
Could not load file or assembly AjaxToolkit error in SharePoint 2013 Visual Webpart
I wanted to use Ajax controls in a visual web part for SharePoint 2013, so I followed these steps.
1 - I cut the snippet below from the <head></head> section and pasted it into the <body></body> section of my master page, since this is a requirement for SharePoint 2013 Ajax functionality.
Snippet:
<!--MS:<SharePoint:AjaxDelta id="DeltaSPWebPartManager" runat="server">-->
<!--MS:<WebPartPages:SPWebPartManager runat="server">-->
<!--ME:</WebPartPages:SPWebPartManager>-->
<!--ME:</SharePoint:AjaxDelta>-->
2 - Downloaded the latest Ajax Control Toolkit and then added the toolkit DLL from the toolkit library to my solution's bin --> debug folder.
3 - Added the Ajax entries to web.config in these sections:
in <SafeControls>
<SafeControl Assembly="ajaxcontroltoolkit, Version=4.1.7.1213, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e, processorArchitecture=MSIL" Namespace="AjaxControlToolkit" TypeName="*" />
in <assemblies>
<add assembly="AjaxControlToolkit, Version=4.1.7.1213, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e" />
in <controls>
<add tagPrefix="ajax" namespace="AjaxControlToolkit" assembly="AjaxControlToolkit, Version=4.1.7.1213, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e" />
Finally, I registered Ajax in the visual web part as:
<%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="ajaxtoolkit" %>
and added the following calendar control:
<asp:TextBox MaxLength="10" ID="txtToDate" runat="server" CssClass="FormTextBoxSmall"></asp:TextBox>
<asp:ImageButton ID="calReqDateTo" runat="server" ImageUrl="" />
<ajaxtoolkit:CalendarExtender id="CalendarExtender2" runat="server" targetcontrolid="txtToDate"
format="dd/MM/yyyy" popupbuttonid="calReqDateTo" />
I reset IIS and deployed my solution, but when deployed it throws the error:
"could not load file or assembly AjaxToolkit,Version=4.1.7.1213 "
Please help.
Hi,
This issue has been discussed and resolved in the thread below:
http://social.technet.microsoft.com/Forums/sharepoint/en-US/60fa19fe-86a0-446b-b61f-11a82fe4287f/how-to-implement-ajax-toolkit-for-sharepoint2013?forum=sharepointdevelopment&prof=required
Please let us know if you need further details
Sekar - Our life is short, so help others to grow
Whenever you see a reply and you think it is helpful, click "Vote As Helpful"! And whenever
you see a reply being an answer to the question of the thread, click "Mark As Answer". -
Does iCal's speed slow as file grows?
I'm concerned that iCal is going to slow down as the program gets loaded up with events and to do lists.
Is anybody using iCal for a small business making 20 to 30 entries per day and seeing a slowdown in the program's speed?
We sync iCal with three computers and we need the program to remain fast to use and sync for our staff.
Is this free iCal program adequate for a small business or should I be looking at a program like Now Up to Date & Contact for my needs?
I spoke to support at BusySync (busymac.com) as an alternative to .Mac sync for iCal and they said they had no reports of iCal slowing down as the file gets larger.
Any comments are appreciated.
It's worse than that: iCal doesn't seem to slow as files grow, it seems to slow randomly, and no one on this forum has the faintest idea why. Search "slow" and look back. There are hundreds of entries from people who've experienced iCal slowing to a snail's pace and no one has a solution. I made the mistake of moving my business (maybe 1 or 2 entries a day) to iCal and now I'm stuck waiting 10 seconds every time I want to make the smallest change. AM to PM - 10 seconds, one calendar to another - 10 seconds. It takes even longer to duplicate an entry or copy an entry. I tried to change fourteen entries to a different calendar today and had to sit through seven minutes of the spinning beachball waiting for that simple change.
It doesn't sync reliably with the four other Macs I own and I'm also convinced it's crashing iTunes on me every time I try to sync Contacts and Calendars. I've been a Mac devotee since 1987 and I swear this is the worst single program Apple has ever written. In System 9 I was able to write a better calendar program in HyperCard. It's not worth the time it takes to install. I'd think twice before I used it for anything important!
I wish someone would at least acknowledge the problem!!!!
Peter May -
Optimal Locations for SQL 2005 TempDB & ReportServerTempDB MDF & LDF Files
Hello. I have SQL Server 2005 SP4 installed on Windows Server 2003 R2 Standard SP2.
The server has four CPU cores. The server was configured with the following drive configuration.
RAID 1 - System drive
RAID 5 - SQL MDF files
RAID 1 - SQL LDF files
There is currently one of each of the following files on the server.
The MDF files are on the RAID 5 array, while the LDF files are on the RAID1 array for SQL LDF files.
TempDB.mdf
ReportServerTempDB.mdf
TempLog.ldf
ReportServerTempDB_log.ldf
It is my understanding that having one TempDB per core is a best practice, so I should create three more TempDBs.
Does that also apply to ReportServerTempDB? Given the drive configuration of this server, where should all of the required MDF and LDF files for the TempDBs and ReportServerTempDBs be located?
You cannot create more than one tempdb in any SQL instance. The recommendation is to have additional tempdb *data files* depending on cores, up to a maximum of 8. Obviously this depends on the environment. This recommendation is only for tempdb; it is not
for ReportServerTempDB. They are totally different, and ReportServerTempDB is not used in the same way as tempdb.
http://www.sqlskills.com/blogs/paul/a-sql-server-dba-myth-a-day-1230-tempdb-should-always-have-one-data-file-per-processor-core/
http://blogs.msdn.com/b/cindygross/archive/2009/11/20/compilation-of-sql-server-tempdb-io-best-practices.aspx
Now, it is good to keep tempdb on its own drive, but because you don't have a dedicated one, you can just put the tempdb data files on the data-file drive and the log file on the log-file drive.
You might want to consider moving them to a separate disk if you see I/O contention.
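For reference, additional tempdb data files are added with ALTER DATABASE; a minimal sketch, with the file name, path and sizes as placeholders:
-- Repeat with a new NAME/FILENAME for each additional data file
ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev2', FILENAME = N'D:\SQLData\tempdev2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
Ideally keep all tempdb data files the same size so that proportional fill spreads I/O evenly across them.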
Regards, Ashwin Menon My Blog - http:\\sqllearnings.com -
903/Enterprise; log files grow boundlessly and weekly bounces are needed
This feedback is the result of an eval of 903 IAS
Enterprise on Solaris, a 2-box cluster.
This feedback is FYI.
After installing IAS Enterprise 903 many times, testing
stop/start/reboot use cases I noticed that many internal
log files grow boundlessly.
I've seen the notes on turning down log levels. This
is still far from adequate. Log files in a 24x365
architecture must be manageable and have settable, finite maximum disk
space consumption.
Framework error events also need to be hooked into an Enterprise
error and alert protocol. SNMP is not my favorite, but it's
a start. JMX is my favorite. Also check out the free
Big Brother monitor, www.bb4.com, and its wire protocol.
Thanks and best of luck,
Curt
Just curious whether other folks have any opinions on
OEM's suitability for a high rel. environment?
On log file management?
curt -
We have a situation where someone deleted the contents of a table in the database. We have a backup of the MDF/LDF files which we can restore the database from, though we do not want to restore the entire database but only one table. I was thinking of creating
a new database on the same SQL instance, attaching the mdf/ldf files to this newly created database, and then importing the information from this database back into the original one. How do I go about doing this using SQL Server? Any help will be greatly appreciated. -RS
Hi VJ Shah,
If you are using SQL Server 2005 and higher, then we could use the Import and Export Wizard to copy the new table's data to the old one. Here are the general steps:
Right click old database, select Tasks and then select Import Data…
On the Choose a Data Source page, type your server name and select the new database, and then Next.
On the Choose a Destination page, type your server name and select your old database, and then Next.
On the Specify Table Copy or Query page, select Copy data from one or more tables or views, and then Next.
On the Select Source Tables and Views, select the table which you want to copy,
specify the destination on the right column, and then Next.
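If you would rather script it than use the wizard, a hedged T-SQL sketch of the same idea (the database name, file paths and table name are placeholders, and it assumes the table definitions match):
-- Attach the copied backup files under a new database name
CREATE DATABASE [MyDB_Restored]
ON (FILENAME = N'D:\Restore\MyDB.mdf'),
(FILENAME = N'D:\Restore\MyDB_log.ldf')
FOR ATTACH;
-- Copy the deleted rows back into the live table
-- (use an explicit column list and SET IDENTITY_INSERT if the table has an identity column)
INSERT INTO [MyDB].dbo.MyTable
SELECT * FROM [MyDB_Restored].dbo.MyTable;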
If anything is unclear, please feel free to ask.
Thanks,
Weilin Qiao
Please remember to mark the replies as answers if they help and unmark them if they provide no help. This can be beneficial to other community members reading the thread. -
CS6 can not save files over 2GB???????
I need to make a giant poster at 6 ft wide x 5 ft tall. I created a blank document in CS6 at 200 ppi. I created some text and imported a couple of 25MB images. I am unable to save the file as a .PSD because it indicates that PS cannot save files over 2GB.
1. How did this file grow to become over 2GB?
2. Why can PS not save a .PSD over 2GB?
3. What do you do if you need to create a large format print such as this?
Thanks in advance.
Very true.
However, Jeff knows exactly what he's doing and he's very, very good at it. If your image were fortunate enough to end up being printed by Jeff Schewe, he'd be able to tell you exactly what kind of an image file he wants from you.
Always check with the printer. -
CF9 - Best way to compare LDAP data in .ldf file against database?
Hello, everyone.
I am tasked with creating a script that will take LDAP information from a .ldf file and compare each entry to a database, adding names that meet certain criteria.
What is the best way to do this? I've never worked with LDF files before.
Thank you,
^_^
JDBC concerns connecting to the database. I have only worked on Oracle and I am not sure whether this is Oracle-specific or SQL standard. Basically it works like this:
The following statement combines results with the MINUS operator, which returns only rows returned by the first query but not by the second:
SELECT product_id FROM inventories
MINUS
SELECT product_id FROM order_items;
taken from : http://www.cs.ncl.ac.uk/teaching/facilities/swdoc/oracle9i/server.920/a96540/queries5a.htm -
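Side note: if the database being compared against is SQL Server rather than Oracle, the standard-SQL equivalent of MINUS is EXCEPT, e.g. (using the same hypothetical tables as above):
SELECT product_id FROM inventories
EXCEPT
SELECT product_id FROM order_items;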
InfoSpoke fails with message "Could not open file on application server"
BW Experts,
I created an InfoSpoke that is configured to extract to a flat file. The name of the file is specified using a logical filename. During extraction, the InfoSpoke reports the error message "Could not open file on application server" and marks the extraction process as red (failed). I have tried to run the InfoSpoke in background mode and in dialog mode and the same error appears. After I run it in dialog mode, I check SU53 for authorization errors and do not find any. I also tried changing the logical filename setup in transaction FILE to a more "friendly" directory in which I'm sure I have authorizations (e.g. my UNIX home directory) and I'm still getting errors.
Can you please share your thoughts on this? Any help will be greatly appreciated. I also promise to award points.
Hi Nagesh,
Thanks for helping out.
If I configure the InfoSpoke to download to a file using the "File Name" option and also check the "Application server" checkbox, the extract works correctly (extraction to the default SAP path with filename = InfoSpoke name). If I configure the InfoSpoke to download to the local workstation, the InfoSpoke also works correctly. It's only when I configure it to download to the application server and use the "Logical filename" option that I encounter a failed extract.
Here are the messages is the Open Hub Monitor:
(red icon) Request No.147515
0 Data Records
Runtime 1 sec.
(red icon)Run No. 1
0 Data Records
Runtime 1 sec.
Messages for Run
Extraction is running RSBO 305
Could not open file on application server RSBO 214
Request 147515 was terminated before extraction RSBO 326
Request 1475151: Error occured RSBO 322
If I click on the error message, no further messages or clues are displayed. Note, this is a new InfoSpoke that is currently in the dev system.
Please help. -
2GB OR NOT 2GB - FILE LIMITS IN ORACLE
Product: ORACLE SERVER
Date written: 2002-04-11
2GB OR NOT 2GB - FILE LIMITS IN ORACLE
======================================
Introduction
~~~~~~~~~~~~
This article describes "2Gb" issues. It gives information on why 2Gb
is a magical number and outlines the issues you need to know about if
you are considering using Oracle with files larger than 2Gb in size.
It also
looks at some other file related limits and issues.
The article has a Unix bias as this is where most of the 2Gb issues
arise but there is information relevant to other (non-unix)
platforms.
Articles giving port specific limits are listed in the last section.
Topics covered include:
Why is 2Gb a Special Number ?
Why use 2Gb+ Datafiles ?
Export and 2Gb
SQL*Loader and 2Gb
Oracle and other 2Gb issues
Port Specific Information on "Large Files"
Why is 2Gb a Special Number ?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Many CPU's and system call interfaces (API's) in use today use a word
size of 32 bits. This word size imposes limits on many operations.
In many cases the standard API's for file operations use a 32-bit signed
word to represent both file size and current position within a file (byte
displacement). A 'signed' 32bit word uses the top most bit as a sign
indicator leaving only 31 bits to represent the actual value (positive or
negative). In hexadecimal the largest positive number that can be
represented in 31 bits is 0x7FFFFFFF, which is +2147483647 decimal.
This is ONE less than 2Gb.
Files of 2Gb or more are generally known as 'large files'. As one might
expect problems can start to surface once you try to use the number
2147483648 or higher in a 32bit environment. To overcome this problem
recent versions of operating systems have defined new system calls which
typically use 64-bit addressing for file sizes and offsets. Recent Oracle
releases make use of these new interfaces but there are a number of issues
one should be aware of before deciding to use 'large files'.
What does this mean when using Oracle ?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The 32bit issue affects Oracle in a number of ways. In order to use large
files you need to have:
1. An operating system that supports 2Gb+ files or raw devices
2. An operating system which has an API to support I/O on 2Gb+ files
3. A version of Oracle which uses this API
Today most platforms support large files and have 64bit APIs for such
files.
Releases of Oracle from 7.3 onwards usually make use of these 64bit APIs
but the situation is very dependent on platform, operating system version
and the Oracle version. In some cases 'large file' support is present by
default, while in other cases a special patch may be required.
At the time of writing there are some tools within Oracle which have not
been updated to use the new API's, most notably tools like EXPORT and
SQL*LOADER, but again the exact situation is platform and version specific.
Why use 2Gb+ Datafiles ?
~~~~~~~~~~~~~~~~~~~~~~~~
In this section we will try to summarise the advantages and disadvantages
of using "large" files / devices for Oracle datafiles:
Advantages of files larger than 2Gb:
On most platforms Oracle7 supports up to 1022 datafiles.
With files < 2Gb this limits the database size to less than 2044Gb.
This is not an issue with Oracle8 which supports many more files.
In reality the maximum database size would be less than 2044Gb due
to maintaining separate data in separate tablespaces. Some of these
may be much less than 2Gb in size.
Less files to manage for smaller databases.
Less file handle resources required
Disadvantages of files larger than 2Gb:
The unit of recovery is larger. A 2Gb file may take between 15 minutes
and 1 hour to backup / restore depending on the backup media and
disk speeds. An 8Gb file may take 4 times as long.
Parallelism of backup / recovery operations may be impacted.
There may be platform specific limitations - Eg: Asynchronous IO
operations may be serialised above the 2Gb mark.
As handling of files above 2Gb may need patches, special configuration
etc.. there is an increased risk involved as opposed to smaller files.
Eg: On certain AIX releases Asynchronous IO serialises above 2Gb.
Important points if using files >= 2Gb
Check with the OS Vendor to determine if large files are supported
and how to configure for them.
Check with the OS Vendor what the maximum file size actually is.
Check with Oracle support if any patches or limitations apply
on your platform , OS version and Oracle version.
Remember to check again if you are considering upgrading either
Oracle or the OS in case any patches are required in the release
you are moving to.
Make sure any operating system limits are set correctly to allow
access to large files for all users.
Make sure any backup scripts can also cope with large files.
Note that there is still a limit to the maximum file size you
can use for datafiles above 2Gb in size. The exact limit depends
on the DB_BLOCK_SIZE of the database and the platform. On most
platforms (Unix, NT, VMS) the limit on file size is around
4194302*DB_BLOCK_SIZE.
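For example, with a common DB_BLOCK_SIZE of 8192 bytes that limit works out to
roughly 4194302 * 8192 = 34,359,721,984 bytes, i.e. just under 32Gb per datafile.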
Important notes generally
Be careful when allowing files to automatically resize. It is
sensible to always limit the MAXSIZE for AUTOEXTEND files to less
than 2Gb if not using 'large files', and to a sensible limit
otherwise. Note that due to <Bug:568232> it is possible to specify
a value of MAXSIZE larger than Oracle can cope with, which may
result in internal errors after the resize occurs. (Errors
typically include ORA-600 [3292])
On many platforms Oracle datafiles have an additional header
block at the start of the file so creating a file of 2Gb actually
requires slightly more than 2Gb of disk space. On Unix platforms
the additional header for datafiles is usually DB_BLOCK_SIZE bytes
but may be larger when creating datafiles on raw devices.
2Gb related Oracle Errors:
These are a few of the errors which may occur when a 2Gb limit
is present. They are not in any particular order.
ORA-01119 Error in creating datafile xxxx
ORA-27044 unable to write header block of file
SVR4 Error: 22: Invalid argument
ORA-19502 write error on file 'filename', blockno x (blocksize=nn)
ORA-27070 skgfdisp: async read/write failed
ORA-02237 invalid file size
KCF:write/open error dba=xxxxxx block=xxxx online=xxxx file=xxxxxxxx
file limit exceed.
Unix error 27, EFBIG
Export and 2Gb
~~~~~~~~~~~~~~
2Gb Export File Size
~~~~~~~~~~~~~~~~~~~~
At the time of writing most versions of export use the default file
open API when creating an export file. This means that on many platforms
it is impossible to export a file of 2Gb or larger to a file system file.
There are several options available to overcome 2Gb file limits with
export such as:
- It is generally possible to write an export > 2Gb to a raw device.
Obviously the raw device has to be large enough to fit the entire
export into it.
- By exporting to a named pipe (on Unix) one can compress, zip or
split up the output.
See: "Quick Reference to Exporting >2Gb on Unix" <Note:30528.1>
- One can export to tape (on most platforms)
See "Exporting to tape on Unix systems" <Note:30428.1>
(This article also describes in detail how to export to
a unix pipe, remote shell etc..)
Other 2Gb Export Issues
~~~~~~~~~~~~~~~~~~~~~~~
Oracle has a maximum extent size of 2Gb. Unfortunately there is a problem
with EXPORT on many releases of Oracle such that if you export a large table
and specify COMPRESS=Y then it is possible for the NEXT storage clause
of the statement in the EXPORT file to contain a size above 2Gb. This
will cause import to fail even if IGNORE=Y is specified at import time.
This issue is reported in <Bug:708790> and is alerted in <Note:62436.1>
An export will typically report errors like this when it hits a 2Gb
limit:
. . exporting table BIGEXPORT
EXP-00015: error on row 10660 of table BIGEXPORT,
column MYCOL, datatype 96
EXP-00002: error in writing to export file
EXP-00002: error in writing to export file
EXP-00000: Export terminated unsuccessfully
There is a secondary issue reported in <Bug:185855> which indicates that
a full database export generates a CREATE TABLESPACE command with the
file size specified in BYTES. If the filesize is above 2Gb this may
cause an ORA-2237 error when attempting to create the file on IMPORT.
This issue can be worked around by creating the tablespace prior to
importing, specifying the file size in 'M' instead of in bytes.
<Bug:490837> indicates a similar problem.
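A sketch of that workaround, with the tablespace name, datafile path and size purely illustrative:
CREATE TABLESPACE big_ts
DATAFILE '/u01/oradata/PROD/big_ts01.dbf' SIZE 2500M;
With the tablespace pre-created, the import can then use it rather than creating it with a byte-denominated size.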
Export to Tape
~~~~~~~~~~~~~~
The VOLSIZE parameter for export is limited to values less than 4Gb.
On some platforms it may be only 2Gb.
This is corrected in Oracle 8i. <Bug:490190> describes this problem.
SQL*Loader and 2Gb
~~~~~~~~~~~~~~~~~~
Typically SQL*Loader will error when it attempts to open an input
file larger than 2Gb with an error of the form:
SQL*Loader-500: Unable to open file (bigfile.dat)
SVR4 Error: 79: Value too large for defined data type
The examples in <Note:30528.1> can be modified for use with SQL*Loader
for large input data files.
Oracle 8.0.6 provides large file support for discard and log files in
SQL*Loader but the maximum input data file size still varies between
platforms. See <Bug:948460> for details of the input file limit.
<Bug:749600> covers the maximum discard file size.
Oracle and other 2Gb issues
~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section lists miscellaneous 2Gb issues:
- From Oracle 8.0.5 onwards 64bit releases are available on most platforms.
An extract from the 8.0.5 README file introduces these - see <Note:62252.1>
- DBV (the database verification file program) may not be able to scan
datafiles larger than 2Gb reporting "DBV-100".
This is reported in <Bug:710888>
- "DATAFILE ... SIZE xxxxxx" clauses of SQL commands in Oracle must be
specified in 'M' or 'K' to create files larger than 2Gb otherwise the
error "ORA-02237: invalid file size" is reported. This is documented
in <Bug:185855>.
- Tablespace quotas cannot exceed 2Gb on releases before Oracle 7.3.4.
Eg: ALTER USER <username> QUOTA 2500M ON <tablespacename>
reports
ORA-2187: invalid quota specification.
This is documented in <Bug:425831>.
The workaround is to grant users UNLIMITED TABLESPACE privilege if they
need a quota above 2Gb.
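For example (the user name is a placeholder):
GRANT UNLIMITED TABLESPACE TO some_user;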
- Tools which spool output may error if the spool file reaches 2Gb in size.
Eg: sqlplus spool output.
- Certain 'core' functions in Oracle tools do not support large files -
See <Bug:749600> which is fixed in Oracle 8.0.6 and 8.1.6.
Note that this fix is NOT in Oracle 8.1.5 nor in any patch set.
Even with this fix there may still be large file restrictions as not
all code uses these 'core' functions.
Note though that <Bug:749600> covers CORE functions - some areas of code
may still have problems.
Eg: CORE is not used for SQL*Loader input file I/O
- The UTL_FILE package uses the 'core' functions mentioned above and so is
limited by 2Gb restrictions in Oracle releases which do not contain this fix.
<Package:UTL_FILE> is a PL/SQL package which allows file IO from within
PL/SQL.
Port Specific Information on "Large Files"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Below are references to information on large file support for specific
platforms. Although every effort is made to keep the information in
these articles up-to-date it is still advisable to carefully test any
operation which reads or writes from / to large files:
Platform See
~~~~~~~~ ~~~
AIX (RS6000 / SP) <Note:60888.1>
HP <Note:62407.1>
Digital Unix <Note:62426.1>
Sequent PTX <Note:62415.1>
Sun Solaris <Note:62409.1>
Windows NT Maximum 4Gb files on FAT
Theoretical 16Tb on NTFS
** See <Note:67421.1> before using large files
on NT with Oracle8
*2 There is a problem with DBVERIFY on 8.1.6
See <Bug:1372172>
I'm not aware of a packaged PL/SQL solution for this in Oracle 8.1.7.3 - however it is very easy to create such a program...
Step 1
Write a simple Java program like the one listed:
import java.io.File;
public class fileCheckUtl {
    public static int fileExists(String FileName) {
        File x = new File(FileName);
        if (x.exists())
            return 1;
        else
            return 0;
    }
    public static void main(String args[]) {
        fileCheckUtl f = new fileCheckUtl();
        int i;
        i = f.fileExists(args[0]);
        System.out.println(i);
    }
}
Step 2 - Load this into the Oracle database using loadjava:
loadjava -verbose -resolve -user user/pw@db fileCheckUtl.java
The output should be something like this:
creating : source fileCheckUtl
loading : source fileCheckUtl
creating : fileCheckUtl
resolving: source fileCheckUtl
Step 3 - Create a PL/SQL wrapper for the Java Class:
CREATE OR REPLACE FUNCTION FILE_CHECK_UTL (file_name IN VARCHAR2) RETURN NUMBER AS
LANGUAGE JAVA
NAME 'fileCheckUtl.fileExists(java.lang.String) return int';
Step 4 Test it:
SQL> select file_check_utl('f:\myjava\fileCheckUtl.java') from dual
2 /
FILE_CHECK_UTL('F:\MYJAVA\FILECHECKUTL.JAVA')
1 -
Hi Folks,
I was trying to save the basic search template master page "seattle.master" after making a change to the template.
I have just added a "CompanyName" folder and updated the line below in seattle.master.
The change is this: <SharePoint:CssRegistration Name="Themable/CompanyName/corev15.css" runat="server"/>
When I save it and refresh the page in the browser, it shows a "Something went wrong" error.
ULS says the following error: "UserAgent not Available, file operation may not be optimized"
Please let us know if there is a solution.
Any help Much appreciated !
Thanks,
Sal
Hi Salman,
Thanks for posting this issue,
Just remove the tag given below and check it out. It might be that your control is conflicting with others.
Also, browse the URL below for more details:
http://social.msdn.microsoft.com/Forums/office/en-US/b32d1968-81f1-42cd-8f45-798406896335/how-apply-custom-master-page-to-performance-point-dashboard-useragent-not-available-file?forum=sharepointcustomization
I hope this is helpful to you. If this works, Please mark it as Answered.
Regards,
Dharmendra Singh (MCPD-EA | MCTS)
Blog : http://sharepoint-community.net/profile/DharmendraSingh