Documaker 12.1 Standard Edition Performance Problem
We are having performance problems with large batch runs on our Documaker version 12.1 Standard Edition installation. We are seeing much longer run times than we are willing to accept: 5+ hours for a large line of business with a 30,000-policy declaration file. Has anyone else had similar experiences? Any suggestions for optimizing our Documaker batch performance?
You may have to be more specific and/or report your question to Support.
I do recall that there is a performance improvement in 12.1 related to using table rows declared in repeating sections. The more sections you repeated with the table row, the slower it would become. I think the fix is in patch 12.1.3, but I don't know whether that patch has been released yet. As I said, you should check with the Support site. If you have already reported this to Support, check the status of your BugDB entry, as this might be your fix in the works.
If your situation is not related to using repeating table rows, then perhaps you could describe your setup and anything unusual about your situation.
Similar Messages
-
Oracle 10g Enterprise vs Standard Edition performance
For a database hosted on a single machine, is there any performance advantage in using Oracle 10g Enterprise Edition over Standard Edition? For our application we do not require data warehousing or OLAP, just the ability to store and query a large amount of XML as CLOBs or using structured mapping.
I have looked at the online documentation but have not been able to find the answer. Any advice would be appreciated.
Probably not. Unless there is some EE feature you can leverage, there probably won't be a performance difference.
Justin -
Problem Moving From the Standard Edition to the Enterprise Edition
Hi !
I am trying to move from Oracle 10g R2 10.2.0.1 Standard Edition to Oracle 10g R2 10.2.0.1 Enterprise Edition.
Platform is MS Windows XP PRO SP2
I follow all steps described in "Oracle® Database Upgrade Guide 10g Release 2 (10.2)" - section Moving From the Standard Edition to the Enterprise Edition.
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14238/intro.htm#sthref10
The problem is that, after the update from Standard Edition to Enterprise Edition, when I run:
select * from v$version
I see:
Connected to:
Oracle Database 10g Release 10.2.0.1.0 - Production
while I expect to see something like .....
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production With the Partitioning, OLAP and Data Mining Scoring Engine options
Am I wrong? Why does the information in view v$version not update after this?
Regards,
Phil
Message was edited by:
user585467
In your document you refer to the upgrade guide. That upgrade guide applies when you are upgrading from previous Oracle releases to 10g, not from SE to EE.
If you want a procedure to upgrade the SE to EE, then follow this metalink note:
"How to Upgrade from Standard to Enterprise Edition ?
Doc ID: Note:117048.1"
If you performed this specific upgrade and you still see this banner, then you are probably facing a bug similar to the one described in "5844095", where the version is changed from SE to EE only after the 10.2.0.3 patchset is applied.
In your case it should be enough to apply the 10.2.0.3 patchset to show it properly.
~ Madrid -
Problem installing Oracle 8i(8.1.7) standard edition on windows 2000 server
I am trying to install Oracle 8i (8.1.7) Standard Edition on Windows 2000 Server, but the setup program doesn't run.
It starts up, allocates some RAM, then closes before a window even pops up.
I also have MS SQL Server installed on this computer; does that conflict?
Has anyone encountered this problem before? If so, please share a solution if you have one. Thanks,
OPS
Yes. There is a known bug with Pentium 4, Windows 2000 and Oracle 8i.
Here is the workaround:
This is a known bug with Oracle Java-based applications and the Pentium 4. Use this workaround:
1) Copy the install directory from the CD to the hard disk.
2) Edit oraparam.ini and modify it as follows:
* Edit the "SOURCE=" line to show the full path to the CD:
SOURCE={CD DRIVE LETTER}:\stage\products.jar
* Edit the "JRE_LOCATION" line to show the full path to the CD:
JRE_LOCATION={CD DRIVE LETTER}:\stage\Components\oracle\swd\jre\1.1.7\1\DataFiles\Expanded
* Edit the "OUI_LOCATION" line to show the full path to the CD:
OUI_LOCATION={CD DRIVE LETTER}:\stage\Components\oracle\swd\oui\1.6.0.9.0\1\DataFiles\Expanded
* Edit the "JRE_MEMORY_OPTIONS" line to include "-nojit" as the first argument:
JRE_MEMORY_OPTIONS=-nojit -ms16m -mx32m
3) Start setup.exe from your hard drive.
Choose a Custom install and choose not to create a database during the install. This way, the Database Configuration Assistant will not be launched during installation. Do NOT select Net8 products for installation.
You need to add the same Java parameters (-nojit -ms16m -mx32m, before -classpath) for the DB Configuration Assistant and the Net8 Configuration Assistant to run, in:
* assistants\dbca\DBAssist.cl
* network\tools\netca.cl
Run the Database Configuration Assistant and Network Configuration Assistant after installation.
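Taken together, the edits in step 2 would leave an oraparam.ini looking roughly like this. This is only an illustration: the drive letter E: is an assumption for {CD DRIVE LETTER}, and the [Oracle] section name is how this file was typically laid out, so verify against your own copy.

```ini
[Oracle]
; Point the installer at the staged products archive on the CD
SOURCE=E:\stage\products.jar
; Full paths to the JRE and OUI bundled on the CD
JRE_LOCATION=E:\stage\Components\oracle\swd\jre\1.1.7\1\DataFiles\Expanded
OUI_LOCATION=E:\stage\Components\oracle\swd\oui\1.6.0.9.0\1\DataFiles\Expanded
; -nojit must come first: it disables the JIT that crashes on Pentium 4
JRE_MEMORY_OPTIONS=-nojit -ms16m -mx32m
```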
Hope it helps!
Amrit -
Sector size problem when creating a "Standby" in Standard Edition
Hi,
I have a primary database, 11.2.0.4 Standard Edition, with Grid Infrastructure and ASM.
The disk group in ASM uses a sector size of 4k.
I also have a "Standby" that uses a filesystem with a logical and physical sector size of 512.
When duplicating from the primary I get this warning:
ORACLE error from auxiliary database: ORA-01378: The logical block size (4096) of file /u03/oradata/ORCL/onlinelog/group_1.268.874153703 is not compatible with the disk sector size (media sector size is 512 and host sector size is 512)
RMAN-05535: WARNING: All redo log files were not defined properly.
ORACLE error from auxiliary database: ORA-01378: The logical block size (4096) of file /u03/oradata/ORCL/onlinelog/group_2.265.874153719 is not compatible with the disk sector size (media sector size is 512 and host sector size is 512)
RMAN-05535: WARNING: All redo log files were not defined properly.
ORACLE error from auxiliary database: ORA-01378: The logical block size (4096) of file /u03/oradata/ORCL/onlinelog/group_3.263.874153759 is not compatible with the disk sector size (media sector size is 512 and host sector size is 512)
RMAN-05535: WARNING: All redo log files were not defined properly.
ORACLE error from auxiliary database: ORA-01378: The logical block size (4096) of file /u03/oradata/ORCL/onlinelog/group_4.261.874153735 is not compatible with the disk sector size (media sector size is 512 and host sector size is 512)
RMAN-05535: WARNING: All redo log files were not defined properly.
Finished Duplicate Db at 2015-04-21 23:37:10
released channel: CH1
I can recreate 3 redo logs with the correct sector size, but I am unable to drop the current log file:
ERROR at line 1:
ORA-01623: log 4 is current log for instance ORCL (thread 1) - cannot drop
ORA-00312: online log 4 thread 1:
'/u03/oradata/ORCL/onlinelog/group_4.261.874153735'
So how can I solve this problem?
yasinyazici wrote:
Hi
Oracle Data Guard is available only as a feature of Oracle Database Enterprise Edition. It is not available with Oracle Database Standard Edition.
Please look at section 2.3.2, Oracle Software Requirements, of the following document:
http://docs.oracle.com/cd/E11882_01/server.112/e41134/standby.htm#SBYDB4716
Yasin
It's possible:
Data Guard and Oracle Standard Edition (Doc ID 305360.1)
Alternative for standby database in standard edition (Doc ID 333749.1) -
Performance Tab in OEM ( Oracle database Standard Edition )
We have Oracle Database 11g (Standard Edition One).
We have installed OEM (Oracle Enterprise Manager) with the database installation.
We are not able to access the Performance tab in OEM.
1) Is it because the Performance tab cannot be seen in the Standard Edition One version?
2) When I tried to set the control_management_pack_access parameter in the SPFILE, it failed. Is it because of the database edition?
3) Is there any workaround in Standard Edition One to make the Performance tab and the Tuning Pack available, with licensing?
Please identify which specific version of 11g.
I believe the answers are in the docs -
http://download.oracle.com/docs/cd/E11882_01/license.112/e10594/editions.htm#CIHBAEID
http://download.oracle.com/docs/cd/E11882_01/license.112/e10594/options.htm#autoId28
HTH
Srini -
Problem creating a database with RAC 10g and Oracle Standard Edition
I installed CRS (Cluster Ready Services) and then installed the Oracle 10g database. I also applied the 10.1.0.4 patch set. When I create a database with ASM storage, it gives me an error creating the database files.
Any idea?
Thanks
Edu Alcañiz
You were right.
http://download-uk.oracle.com/docs/html/B14203_05/intro.htm#sthref44
under the section
"1.3 Configuration Tasks for Oracle Clusterware and Oracle Real Application Clusters"
"... If you use RAC on Oracle Database 10g Standard Edition, then you must use ASM."
Thanks, problem solved.
- Vegard -
Problems upgrading version 9 Developer to version 10 Standard Edition.
We upgraded ColdFusion 9 Developer Edition to ColdFusion 10 Standard Edition. The CF Administrator reports that it is running ColdFusion 10 Standard Edition with the correct serial number, but it is still running in developer mode.
I called Adobe Support, but I could not make the support agent I spoke with understand that ColdFusion is an Adobe product. She kept asking if it was for Acrobat or Illustrator. Once she understood that ColdFusion was the Adobe product, she wanted to know which operating system I was using. She would not accept Windows Server 2003 R2 SP2, and insisted the correct answers were XP, 7 or 8.
I finally entered the version 10 serial number in the "new serial number" box in CF Administrator and rebooted the server, even though the Administrator was already reporting the correct version 10 serial number. That seemed to have solved the problem.
I was wrong. Re-entering the version 10 serial number merely resets the IP addresses that are allowed to access the ColdFusion server. The serial number shown in the \ColdFusion10\cfusion\lib\license.properties file is correct for ColdFusion 10 Standard; however, there is no previous serial number shown. (Since this was an installation of a full retail version of CF 10, not an upgrade, that may be normal?) I tried stopping the ColdFusion services, deleting the "previous_sn=" line and restarting, but I am still running in developer mode.
The drives on our old server are dying so we really need to bring this new server online. Can anyone help? -
Oracle 10g standard edition installation and problems with logon (help pls)
I installed 10g standard edition, however I got the following two messages during installation:
1) The host IP address can not be determined
2) Missing or Invalid password ...
Even though I unlocked all the passwords during install (tiger/scott, sys, system, etc.), I cannot log on now with any of them ("invalid username/password: logon denied"). I also get "ORA-12170: TNS: Connect timeout occurred" when trying to log on with the email address and password I used during installation.
Please help, this is frustrating. Thx!
P.S. Do I need to input anything in the "Host String" box?
Chances are:
1) you are using Windows of some sort;
2) your machine does not have a static IP address (you are using DHCP);
3) you have not installed the loopback adapter, as described in the installation document.
You may want to review the Oracle Database 10g "Installation Guide" manual (for whichever release you are using) for your operating system again. I think you missed a step. -
Report painter performance problem...
I have a client which runs a report group consisting of 14 reports... When we run this program, it takes about 20 minutes to get results... I was assigned to optimize this report...
This is what I've done so far
(this is a SAP generated program)...
1. I've checked the tables that the program is using... (a customized table with more than 20,000 entries, and many others)
2. I've created secondary indexes on the main customized table (the one with 20,000 entries) - it improved the performance a bit (results in about 18 minutes)...
3. I divided the report group into smaller groups of 3 reports each... It greatly improves the performance... (but this is not what the client wants)...
4. I've read an article about report group performance saying that it is a bug
(SAP support recognized the fact that we are dealing with a bug in the SAP standard functionality):
http://it.toolbox.com/blogs/sap-on-db2/sap-report-painter-performance-problem-26000
Anyone have the same problem as mine?
Edited by: christopher mancuyas on Sep 8, 2008 9:32 AM
Edited by: christopher mancuyas on Sep 9, 2008 5:39 AM
Report Painter/Writer always creates a performance issue. I never preferred them, since I have the option of a Z report.
Now you can do only one thing: put more checks on the selection screen to filter the data. I think that's the only way.
Amit. -
URGENT - MB5B: Performance problem
Hi,
We are getting a time-out error while running transaction MB5B. We posted the issue to SAP global support for further analysis, and SAP replied with Note 1005901 to review.
The note consists of creating a Z table and some Z programs to execute MB5B without the time-out error, but SAP has not specified what logic has to be written or how this should be addressed.
Could anyone suggest how we can proceed further?
The note has been attached for reference.
Note 1005901 - MB5B: Performance problems
Note Language: English Version: 3 Validity: Valid from 05.12.2006
Summary
Symptom
o The user starts transaction MB5B, or the respective report
RM07MLBD, for a very large number of materials or for all materials
in a plant.
o The transaction terminates with the ABAP runtime error
DBIF_RSQL_INVALID_RSQL.
o The transaction runtime is very long and it terminates with the
ABAP runtime error TIME_OUT.
o During the runtime of transaction MB5B, goods movements are posted
in parallel:
- The results of transaction MB5B are incorrect.
- Each run of transaction MB5B returns different results for the
same combination of "material + plant".
More Terms
MB5B, RM07MLBD, runtime, performance, short dump
Cause and Prerequisites
The DBIF_RSQL_INVALID_RSQL runtime error may occur if you enter too many
individual material numbers in the selection screen for the database
selection.
The runtime is long because of the way report RM07MLBD works. It reads the
stocks and values from the material masters first, then the MM documents
and, in "Valuated Stock" mode, it then reads the respective FI documents.
If there are many MM and FI documents in the system, the runtimes can be
very long.
If goods movements are posted during the runtime of transaction MB5B for
materials that should also be processed by transaction MB5B, transaction
MB5B may return incorrect results.
Example: Transaction MB5B should process 100 materials with 10,000 MM
documents each. The system takes approximately 1 second to read the
material master data and it takes approximately 1 hour to read the MM and
FI documents. A goods movement for a material to be processed is posted
approximately 10 minutes after you start transaction MB5B. The stock for
this material before this posting has already been determined. The new MM
document is also read, however. The stock read before the posting is used
as the basis for calculating the stocks for the start and end date.
If you execute transaction MB5B during a time when no goods movements are
posted, these incorrect results do not occur.
Solution
The SAP standard release does not include a solution that allows you to
process mass data using transaction MB5B. The requirements for transaction
MB5B are very customer-specific. To allow for these customer-specific
requirements, we provide the following proposed implementation:
Implementation proposal:
o You should call transaction MB5B for only one "material + plant"
combination at a time.
o The list outputs for each of these runs are collected and at the
end of the processing they are prepared for a large list output.
You need three reports and one database table for this function. You can
store the lists in the INDX cluster table.
o Define work database table ZZ_MB5B with the following fields:
- Material number
- Plant
- Valuation area
- Key field for INDX cluster table
o The size category of the table should be based on the number of
entries in material valuation table MBEW.
Report ZZ_MB5B_PREPARE
In the first step, this report deletes all existing entries from the
ZZ_MB5B work table and the INDX cluster table from the last mass data
processing run of transaction MB5B.
o The ZZ_MB5B work table is filled in accordance with the selected
mode of transaction MB5B:
- Stock type mode = Valuated stock
- Include one entry in work table ZZ_MB5B for every "material +
valuation area" combination from table MBEW.
o Other modes:
- Include one entry in work table ZZ_MB5B for every "material +
plant" combination from table MARC
Furthermore, the new entries in work table ZZ_MB5B are assigned a unique
22-character string that later serves as a key term for cluster table INDX.
Report ZZ_MB5B_MONITOR
This report reads the entries sequentially in work table ZZ_MB5B. Depending
on the mode of transaction MB5B, a lock is executed as follows:
o Stock type mode = Valuated stock
For every "material + valuation area" combination, the system
determines all "material + plant" combinations. All determined
"material + plant" combinations are locked.
o Other modes:
- Every "material + plant" combination is locked.
- The entries from the ZZ_MB5B work table can be processed as
follows only if they have been locked successfully.
- Start report RM07MLBD for the current "Material + plant"
combination, or "material + valuation area" combination,
depending on the required mode.
- The list created is stored with the generated key term in the
INDX cluster table.
- The current entry is deleted from the ZZ_MB5B work table.
- Database updates are executed with COMMIT WORK AND WAIT.
- The lock is released.
- The system reads the next entry in the ZZ_MB5B work table.
Application
- The lock ensures that no goods movements can be posted during
the runtime of the RM07MLBD report for the "material + Plant"
combination to be processed.
- You can start several instances of this report at the same
time. This method ensures that all "material + plant"
combinations can be processed at the same time.
- The system takes just a few seconds to process a "material +
Plant" combination so there is just minimum disruption to
production operation.
- This report is started until there are no more entries in the
ZZ_MB5B work table.
- If the report terminates or is interrupted, it can be started
again at any time.
Report ZZ_MB5B_PRINT
You can use this report when all combinations of "material + plant", or
"material + valuation area" from the ZZ_MB5B work table have been
processed. The report reads the saved lists from the INDX cluster table and
adds these individual lists to a complete list output.
Estimated implementation effort
An experienced ABAP programmer requires an estimated three to five days to
create the ZZ_MB5B work table and these three reports. You can find a
similar program as an example in Note 32236: MBMSSQUA.
If you need support during the implementation, contact your SAP consultant.
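The ZZ_MB5B_MONITOR report described above is essentially a lock-process-store-delete worker loop. As a language-neutral illustration of that control flow (the real implementation would be ABAP; every name below - work_table, locks, run_rm07mlbd, indx_store - is invented for this sketch), it might look like:

```python
class SimpleLocks:
    """Toy stand-in for SAP enqueue locks on material+plant combinations."""
    def __init__(self):
        self._held = set()

    def acquire(self, key):
        if key in self._held:       # already locked by another instance
            return False
        self._held.add(key)
        return True

    def release(self, key):
        self._held.discard(key)


def process_work_table(work_table, locks, run_rm07mlbd, indx_store):
    """One pass over the work table; entries locked elsewhere are skipped."""
    processed = []
    for entry in list(work_table):  # iterate over a copy so we can delete
        key = (entry["material"], entry["plant"])
        if not locks.acquire(key):  # a parallel instance owns this entry
            continue
        try:
            # Run the report for one material+plant and store the list
            # under the generated key (like the INDX cluster table).
            indx_store[entry["indx_key"]] = run_rm07mlbd(*key)
            work_table.remove(entry)  # like COMMIT WORK AND WAIT
            processed.append(key)
        finally:
            locks.release(key)        # always release the lock
    return processed
```

Several instances of such a worker can run at the same time; the lock check is what lets them share one work table safely, mirroring the note's suggestion to start the report repeatedly until the table is empty.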
Header Data
Release Status: Released for Customer
Released on: 05.12.2006 16:14:11
Priority: Recommendations/additional info
Category: Consulting
Main Component MM-IM-GF-REP IM Reporting (no LIS)
The note is not release-dependent.
Thanks in advance.
Edited by: Neliea on Jan 9, 2008 10:38 AM
Edited by: Neliea on Jan 9, 2008 10:39 AM
Before you try any of this, try working with database hints as described in Notes 921165, 902157 and 918992.
-
Performance problem when printing - table TSPEVJOB big size
Hi,
in a SAP ERP system there is a performance problem when printing because table TSPEVJOB has many millions of entries.
This table has reached a considerable size (about 90 GB); the DB size is 200 GB.
Standard reports have been scheduled correctly and the kernel of the SAP ERP system is up to date.
I've tried to schedule report RSPO1041 (instead of RSPO0041) to reduce the entries, but it is not possible to run it during normal operations because it locks the printing system.
Do you know why this table has increased ?
Are there any reports to clean this table or other methods ?
Thanks.
Maurizio Manera
Dear,
Please see Note 706478 - Preventing Basis tables from increasing considerably and
Note 195157 - Application log: Deletion of logs.
Note 1566659 - Table TSPEVJOB is filled excessively
Note 1293472 - Spool work process hangs on semaphore 43
Note 1362343 - Deadlocks in table TSP02 and TSPEVJOB
Note 996455 - Deadlocks on TSP02 or TSPEVJOB when you delete
For more information see the below link as,
http://www.sqlservercurry.com/2008/04/how-to-delete-records-from-large-table.html
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=91179
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=83525
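The linked articles describe the generic batched-delete pattern for trimming a very large table. A sketch in SQL Server syntax (the table name, filter column and batch size are invented for illustration; on a SAP system you should prefer the supported reports such as RSPO1041 over direct table deletes):

```sql
-- Illustrative only: delete in small batches so each transaction stays
-- short, locks are released between batches, and the log can be reused.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.BigTable
    WHERE created_at < DATEADD(DAY, -30, GETDATE());
    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to delete
END
```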
If any doubts Let me know.
Thomas
Edited by: thomas_raja on Aug 7, 2011 12:29 PM -
Ann: JDBInsight 1.1 Standard Edition Now Shipping
Inspired (http://www.jinspired.com) today announced the availability of JDBInsight
1.1 Standard Edition. This release includes new visualizations that set a new standard for ease of use and graphical communication. JDBInsight is the only J2EE performance profiler to offer runtime overhead as low as 2%.
Check out the “Visualize the Invisible“ Tour (http://www.jinspired.com/products/jdbinsight/tour.html
Press Release
====================
Dublin, Ireland—May 15, 2002— Inspired Limited today announced the availability of
JDBInsight 1.1 Standard Edition. The new version includes new visualizations that set the standard in terms of ease of use and graphical communication. JDBInsight is the only J2EE performance profiler to offer runtime overhead as low as 2%. No other profiler or performance-monitoring tool on the market provides this low a level of intrusion while at the same time providing detailed transactional analysis.
JDBInsight analyses the access of enterprise data by J2EE™ client-, web-, and bean
containers. The analysis can encompass transaction executions, spanning multiple
J2EE™ containers. JDBInsight captures timing and execution information for enterprise
data accessed by Servlets, JavaServer Pages, Session- and Entity Beans using JDBC™
or an Entity Bean using a container's persistence engine.
Five Reasons For Choosing JDBInsight to Target J2EE™ Performance Problems
1. Minimal Performance Overhead - During performance test runs of ECperf, JDBInsight
introduced as little as 2% overhead to the runtime. Similar runs with open source
solutions showed 50% degradation in the throughput figures. Using a Java profiler
would result in an even greater degradation, rendering any profiling information
useless.
2. Early Detection - Unlike most other J2EE performance assurance offerings, JDBInsight
enables development teams to target performance from day one. Since most tuning efforts
result in changes to application design and code, it is important this is addressed
early rather than late. JDBInsight enables project management to control development
time and hardware costs.
3. Targets Biggest Bottleneck - Java profilers in most cases cannot identify application
bottlenecks even with application server class and package filtering. The information
that is provided is usually incomplete and without context. JDBInsight focuses specifically
on the expensive interaction between applications and databases.
4. Visualizes the Invisible - With the increasing usage of persistence frameworks
in J2EE™ application development, architects and developers find it impossible to
determine what database interactions are occurring. One of the biggest problems is
that the inner workings are encapsulated, thus testers are limited to black box testing,
slowing down development. JDBInsight provides a visualization of the transactional
nature of a J2EE™ application, unmatched by any other similar offering.
5. Application Server and Database Independent - JDBInsight allows developers to
identify poorly performing SQL, independent of database and trace tools. J2EE™ developers
need not configure a database for tracing, nor spend time learning different tools
for extracting traces. JDBInsight goes far beyond the sophistication of current database
tracing tools, by compacting transaction history trees and providing detailed statistics
at various levels of detail. Developers using different application servers also
benefit from the fact that the product is application server independent.
Raves and Reviews
“[JDBInsight] will be a great help to us in discovering the right path to follow
for optimal J2EE performance”
Senior Engineer, CRM Product
“JDBInsight is a powerful tool for understanding and optimizing J2EE applications
using JDBC. It collects a wealth of data about your running application. It then
presents these data in a GUI that lets the user get in and see what the application
is doing in fascinating and insightful ways. To my knowledge, the product is currently
unique in the market”
Technical Architect, CORBA/J2EE Application Server Product
“JDBInsight was fundamental in providing us with a detailed understanding as to how
our container generated and managed persistence.”
J2EE Consultant, Leading Software Corporation
"I really appreciate [JDBInsight]. Very easy to install (server and GUI). The GUI
is very nice and user friendly. Excellent work !!!"
Project Manager, European Financial Institution
"The product looks really impressive"
Senior Software Developer, Leading Software Design Consultancy Company
“With JDBInsight we were able to perform detailed tests based upon our domain, thus
we were able to make decisions in respect of JDBC drivers.”
Technical Lead, Telecom Product
“This tool might be a very good jewel in our application server product.”
Senior CORBA Consultant, Leading Software Corporation
I have used OMWB with 8.1.7 Standard Edition. It should work with 8.1.6; try downloading it and playing with it.
-
Performance problem querying multiple CLOBS
We are running Oracle 8.1.6 Standard Edition on Sun E420r, 2 X 450Mhz processors, 2 Gb memory
Solaris 7. I have created an Oracle Text indexes on several columns in a large table, including varchar2 and CLOB. I am simulating search engine queries where the user chooses to find matches on the exact phrase, all of the words (AND) and any of the words (OR). I am hitting performance problems when querying on multiple CLOBs using the OR, e.g.
select count(*) from articles
where contains (abstract , 'matter OR dark OR detection') > 0
or contains (subject , 'matter OR dark OR detection') > 0
Columns abstract and subject are CLOBs. However, this query works fine for AND;
select count(*) from articles
where contains (abstract , 'matter AND dark AND detection') > 0
or contains (subject , 'matter AND dark AND detection') > 0
The explain plan gives a cost of 2157 for OR and 14.3 for AND.
I realise that multiple contains are not a good thing, but the AND returns sub-second, and the OR is taking minutes! The indexes are created thus:
create index article_abstract_search on article(abstract)
INDEXTYPE IS ctxsys.context parameters ('STORAGE mystore memory 52428800');
The data and index tables are on separate tablespaces.
Can anyone suggest what is going on here, and any alternatives?
Many thanks,
Geoff Robinson
Thanks for your reply, Omar.
I have read the performance FAQ already, and it points out that single CONTAINS clauses are preferred, but I need to check 2 columns. Also, I don't just want a count(*); I will need to select field values. As you can see from my 2 queries, the first has multiple CLOB columns using OR and the second uses AND, with the first taking that much longer. Even with only a single CONTAINS, the cost estimate is 5 times higher for OR than for AND.
Add an extra CONTAINS and it becomes 300 times more costly!
The root table is 3 million rows, the 2 token tables have 6.5 and 3 million rows respectively. All tables have been fully analyzed.
Regards
Geoff -
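For what it's worth, one rewrite sometimes suggested for OR-combined CONTAINS predicates like the ones above is a UNION, so that each branch contains a single CONTAINS that can be resolved against its own CONTEXT index before the row sets are merged. This is only a sketch against the tables named in the thread, and it may or may not be cheaper on a given data set:

```sql
-- Each branch has one CONTAINS, so each can be driven by its own
-- ctxsys.context index; UNION removes rows matched in both columns.
select count(*) from (
  select rowid from articles
  where contains(abstract, 'matter OR dark OR detection') > 0
  union
  select rowid from articles
  where contains(subject, 'matter OR dark OR detection') > 0
);
```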
Log File Issue In SQL server 2005 standard Edition
We have a database of size 375 GB. The data file has 80 GB of free space within it. When trying to rebuild the index, we had 450 GB free on the disk where the log file resides. The rebuild index activity still failed due to a space issue; we added more space and got the job done successfully.
The log file grew to 611 GB to complete the index rebuild.
Version: SQL Server 2005 Standard Edition. Is there a way to estimate the space required for an index rebuild in this version?
I am aware we normally allocate 1.5 times the data file size, but in this case that was totally wrong.
Any suggestions with examples would be appreciated.
Raghu
OK, there are a few things here.
Can you outline for everybody the recovery model you are using, and the frequency with which you take full, differential and transaction log backups?
Are you selectively rebuilding your indexes or are you rebuilding everything?
How often are you doing this? Do you need to?
There are some great resources on automated index maintenance, check out
this post by Kendra Little.
Depending on your recovery point objectives I would expect a production database to be in the full recovery mode and as part of this you need to be taking regular log backups otherwise your log file will just continue to grow. By taking a log backup it will
clear out information from inactive VLF's and therefore allow SQL Server to write back to those VLF's rather than having to grow the log file. This is a simplified version of events, there are caveats.
A VLF will be marked as active if it still has an open transaction in it or there is a HA option that still requires that data to be available as that data has not been copied to another node yet.
Most customers that I see take transaction log backups every 15 - 30 minutes, but this really does depend upon how much data your company can afford to lose. That's another discussion for another day.
Make sure that you take a transaction log backup prior to your job that does your index rebuilds (hopefully a smart job not a sledge hammer job).
As mentioned previously swapping to bulk logged can help to reduce the size of the amount of information logged during index rebuilds. If you do this make sure to swap back into the full recovery model straight after and perform a full backup. There are
problems with the ability to do point in time restores whilst in the bulk logged recovery model, so you need to reduce the amount of time you use it.
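The recovery-model swap described above might be scripted roughly as follows. The database name and backup paths are placeholders, and the point-in-time-restore caveat applies for the whole window the database spends in BULK_LOGGED:

```sql
-- Sketch only: log backup first, rebuild under BULK_LOGGED to cut
-- logging, then return to FULL and re-establish the backup chain.
BACKUP LOG MyDb TO DISK = N'X:\backup\MyDb_prelog.trn';
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;
-- ... run the index rebuild job here ...
ALTER DATABASE MyDb SET RECOVERY FULL;
BACKUP DATABASE MyDb TO DISK = N'X:\backup\MyDb_full.bak';
```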
Really you also need to look at how your indexes are created does the design of them lead to them being fragmented on a regular basis? Are they being used? Are there better indexes out there that can help performance?
Hopefully that should put you on the right track.
If you find this helpful, please mark the post as helpful.
If you think this solves the problem, please propose or mark it as an answer.
Please provide details on your SQL Server environment such as version and edition, also DDL statements for tables when posting T-SQL issues
Richard Douglas
My Blog: http://SQL.RichardDouglas.co.uk
Twitter: @SQLRich