After table partition, some transactions hang
I am running SAP 4.6 and Oracle 10.2
I recently used online reorg to partition table DFKKOP (over 100 million records) into two parts. One partition was for "closed" orders, and the other was for "open" orders (less than 2% of the rows).
Since most of the queries against this table were against open orders, access times improved dramatically (one query went from 20 minutes to 20 seconds).
Fortunately this was still on a test system, where our developers discovered that the change caused transactions FPL9 and CIC0 to hang. To prove that the partitioning was causing the hang, we used our "snapback" capability to return the database to a point in time prior to the partitioning (we did this twice), and the problem disappeared. Other than these hangs, the database seemed to work normally.
Does anyone have an idea why this might be happening? BTW, I had row movement enabled, so that's not it. I have found nothing in SAP Notes or on this forum.
Thanks.
Sandeep Pasumarti wrote:
> I am also thinking of partitioning a large table - a custom table that we use instead of GLPCA; it is 40 GB. Would you happen to have a how-to approach for doing that? How did you handle the ABAP Dictionary side, or do you just do it in Oracle? Thanks!
> PS: I was thinking of partitioning by fiscal year, or by account number range. I'm not sure how I could partition by fiscal year for years moving forward, meaning FY 2002 is one partition, 2003 is another, etc.
Hi Sandeep,
Partitioning is a really cool feature, but it must be weighed against the primary goal you want to achieve.
That goal influences WHICH partitioning method you have to use AND which partitioning keys you have to find.
If you picture it as a triangle, the three corners are these goals:
- Performance
- Administration
- Availability
You will hardly achieve all three; optimizing for one generates some overhead for the others, and your design will typically sit near one corner of the triangle.
Our goal was, in the first place, to improve write performance:
We did it on a SAP BW system on our own Z tables, which were growing bigger and bigger (several hundred gigabytes). After debugging how BW itself manages its partitions, we used the same approach plus a little ABAP to maintain the partitions for our own tables.
Ours was range partitioning, with an identifier (INT4) used as a sequence to flag current and historical calculations as the partition key. We could then TRUNCATE partitions instead of DELETEing rows (the UNDO generation had blown up our batch time window for computing figures).
And yes, we benefit from partition pruning, insofar as a lot of full table scans are now turned into partition scans. Index accesses do not benefit as much from partitioning, but it really depends on HOW your application uses the data.
All your queries should use the partition key as a criterion (in your case FISCYEAR); otherwise you make things worse.
Be aware that you have to use the right data type (imagine a range partition on a VARCHAR2 column used like the NUMBER data type in SAP tables) and evaluate it carefully against the partition type you choose.
There are some pitfalls, like moving partition keys (which you have to avoid).
There is the overhead of creating / dropping / merging partitions, and someone has to be responsible for it (in our case the application, not an SAP admin).
Maintain all indexes as local.
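As a rough illustration of the approach described above (all table, column, and partition names here are invented for the example, not taken from our system), a range-partitioned table with a local index and a partition TRUNCATE might look like this:

```sql
-- Hypothetical range-partitioned table keyed on a sequence-like INT4
-- identifier, similar in spirit to the Z-table setup described above.
CREATE TABLE zcalc_results (
  calc_id   NUMBER(10)   NOT NULL,  -- partition key: calculation run id
  fiscyear  VARCHAR2(4),
  amount    NUMBER
)
PARTITION BY RANGE (calc_id) (
  PARTITION p_hist_1  VALUES LESS THAN (1000),
  PARTITION p_hist_2  VALUES LESS THAN (2000),
  PARTITION p_current VALUES LESS THAN (MAXVALUE)
);

-- Local index: one index segment per table partition, so partition
-- maintenance (truncate, drop) does not invalidate the whole index.
CREATE INDEX zcalc_results_i1 ON zcalc_results (calc_id) LOCAL;

-- Remove an obsolete calculation without generating UNDO for every row,
-- which is what blew up the batch window when using DELETE.
ALTER TABLE zcalc_results TRUNCATE PARTITION p_hist_1;
```

On an SAP system you would normally let the standard tools generate and run such DDL rather than issuing it by hand, so the dictionary stays consistent.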
You can use BRSPACE to partition the table; with your 40 GB, SE14 could also be used, as it lets you define the partitions (hopefully you are at least on a 6.20 system).
For our system it was essential:
1) to fully understand the concept of partitioning (seems obvious, but guess where things most often fail);
2) that it is supported by standard SAP tools (such as SE14 or BRSPACE);
3) to write an interface to handle the partitioning without involving a DBA (imagine you have a process chain that uses your partitioned table and you have to provide a new partition while the chain is running).
We have been running it for more than a year now, and moving to partitioning was a good decision.
We will now extend the feature and use it for archiving on a per-partition basis.
Bye
yk
Similar Messages
-
In which table are all transaction codes saved?
Hi,
All transaction codes are stored in table TSTC. Their texts are stored in table TSTCT.
Here are some T-codes:
OSS1 SAP Online Service System
OY19 Compare Tables
S001 ABAP Development Workbench
S002 System Administration.
SA38 Execute a program.
SCAT Computer Aided Test Tool
SCU0 Compare Tables
SE01 Old Transport & Corrections screen
SE09 Workbench Organizer
SE10 Customizing Organizer / new Transport & Correction screen (to release a request for transport, enter the user name, press Enter, select the changed object, and choose Release)
SE11 ABAP/4 Dictionary Maintenance
SE12 ABAP/4 Dictionary Display
SE13 Maintain Technical Settings (Tables)
SE14 ABAP/4 Dictionary: Database Utility
SE15 ABAP/4 Repository Information System
SE16 Data Browser (display table contents)
SE17 General Table Display
SE30 ABAP/4 Runtime Analysis (press the Tips and Tricks button for good stuff)
SE32 ABAP/4 Text Element Maintenance
SE35 ABAP/4 Dialog Modules
SE36 ABAP/4: Logical Databases
SE37 ABAP/4 Function Modules (Function Library)
SE38 ABAP/4 Editor / Program Development
SE39 Splitscreen Editor: Program Compare
SE41 Menu Painter
SE43 Maintain Area Menu
SE51 Screen Painter
SE54 Generate View Maintenance Module
SE61 R/3 Documentation
SE62 Industry utilities
SE63 Translate Short/Long Texts
SE64 Terminology
SE65 R/3 Documents: Short Text Statistics
SE66 R/3 Documentation Statistics (Test!)
SE68 Translation Administration
SE71 SAPscript Layout Set (create/change)
SE72 SAPscript styles
SE73 SAPscript font maintenance (revised)
SE74 SAPscript format conversion
SE75 SAPscript Settings
SE76 SAPscript Translation Layout Sets
SE77 SAPscript Translation Styles
SE80 ABAP/4 Development Workbench (Repository Browser)
SE81 SAP Application Hierarchy
SE82 Customer Application Hierarchy
SE84 ABAP/4 Repository Information System
SE85 ABAP/4 Dictionary Information System
SE86 ABAP/4 Repository Information System
SE87 Data Modeler Information System
SE88 Development Coordination Info System
SE91 Maintain Messages
SE92 Maintain system log messages
SE93 Maintain Transaction Codes
SEU Object Browser
SHD0 Transaction variant maintenance
SM04 Overview of Users (cancel/delete sessions)
SM12 Display and delete lock entries (in the event you are locked out)
SM21 View the system log, very useful when you get a short dump. Provides much more info than short dump
SM30 Maintain Table Views.
SM31 Table Maintenance
SM32 Table maintenance
SM35 View Batch Input Sessions
SM37 View background jobs
SM50 Process Overview.
SM51 List of SAP application servers
SM62 Display/Maintain events in SAP, also use function BP_EVENT_RAISE
SMEN Display the menu path to get to a transaction
SMOD/CMOD Transactions for processing/editing/activating new customer enhancements.
SNRO Object browser for number range maintenance.
SPRO Start SAP IMG (Implementation Guide).
SQ00 ABAP/4 Query: Start Queries
SQ01 ABAP/4 Query: Maintain Queries
SQ02 ABAP/4 Query: Maintain Funct. Areas
SQ03 ABAP/4 Query: Maintain User Groups
SQ07 ABAP/4 Query: Language Comparison
ST05 Trace SQL Database Requests.
SU53 Display Authorization Values for User.
Human Resources
PA03 Change Payroll control record
PA20 Display PA Infotypes
PA30 Create/Change PA Infotypes
PP02 Quick Entry for PD object creation
PU00 Delete PA infotypes for an employee. Will not be able to delete an infotype if there is cluster data assigned to the employee.
Sales and Distribution (SD)
OLSD Config for SD. Use Tools-Data Transfer-Conditions to setup SAP supplied BDC to load pricing data
VA01 Create Sales/Returns Order Initial Screen
VB21 Transaction for Volume Lease Purchases (done as a sales deal)
VK15 Transaction used to enter multiple sales conditions (most will be entered here)
VL02 Deliveries
SAP Office
SO00 send a note through SAP, can be sent to Internet, X400, etc
Financial Accounting (FI)
FGRP Report Writer screen
FM12 View blocked documents by user
FST2 Insert language specific name for G/L account.
FST3 Display G/L account name.
KEA0 Maintain operating concern.
KEKE Activate CO-PA.
KEKK Assign operating concern.
KL04 Delete activity type.
KS04 Delete a cost centre.
KSH2 Change cost centre group delete.
OBR2 Deletion program for customers, vendors, G/L accounts.
OKC5 Cost element/cost element group deletion.
OKE1 Delete transaction data.
OKE2 Delete a profit centre.
OKI1 Determine Activity Number: Activity Types (Assignment of material number/service to activity type)
OMZ1 Definition of partner roles.
OMZ2 Language dependent key reassignment for partner roles.
Material Management (MM)
MM06 Flag material for deletion.
OLMS materials management configuration menu, most of the stuff under this menu is not under the implementation guide
MM configuration transactions
OLMB Inventory management/Physical Inventory
OLMD MM Consumption-Based Planning
OLME MM Purchasing
OLML Warehouse Management
OLMR Invoice Verification
OLMS Material Master data
OLMW MM Valuation/Account Assignment
Configuration related
OLE OLE demo transaction
OLI0 C Plant Maintenance Master Data
OLI1 Set Up INVCO for Material Movements
OLI8 Set Up SIS for Deliveries
OLIA C Maintenance Processing
OLIP C Plant Maintenance Planning
OLIQ New set-up of QM info system
OLIX Set Up Copying/Deleting of Versions
OLIY Set Up Deletion of SIS/Inter.Storage
OLIZ Stat Set Up INVCO: Invoice Verify
OLM2 Customizing: Volume-Based Rebates
OLMB C RM-MAT Inventory Management Menu
OLMD C RM-MAT MRP Menu
OLME C MM Menu: Purchasing
OLML C MM Menu for Warehouse Management
OLMR C RM-MAT Menu: Invoice Verification
OLMS C RM-MAT Master Data Menu
OLMW C RM-MAT Valuation/Acct. Asset. Menu
OLPA SOP Configuration
OLPE Sales order value
OLPF SPRO Start SAP IMG (Implementation Guide).
OLPK Customizing for capacity planning
OLPR Project System Options
OLPS Customizing Basic Data
OLPV Customizing: Std. Value Calculation
OLQB C QM QM in Procurement
OLQI Analysis
OLVD C SD Shipping Menu
OLVF C SD Billing Menu
OLQM Customizing QM Quality Notifications
OLQS C QM Menu Basic Data
OLQW C QM Inspection Management
OLQZ Quality Certificates
OLS1 Customizing for Rebates
OLSD Customizing: SD
OLVA C SD Sales Menu
OLVS C SD Menu for Master Data
Regards,
Pavan -
What happens to Existing index after table partition and created with local index
Hi guys,
The table PART currently looks like this: id number, name varchar2(100), salary number.
To the existing table PART I am adding one more column, DATASEQ NUMBER. I have been asked to partition table PART based on DATASEQ. The table is now created with this logic:
create table part( id number, name varchar2(100), salary number, DATASEQ number) partition by list(dataseq) (partition PART_INITIAL values (1));
Suggestion required: since the table is partitioned on DATASEQ, I was asked to add a local index on DATASEQ, which I did: create index idx on part(dataseq) LOCAL; Now my question: there are already existing indexes on the columns ID and SALARY.
1) IDX on DATASEQ is created LOCAL, so it has one segment per partition of the main table. What happens to the existing indexes on ID and SALARY? Will they also be recreated as local?
Please suggest.
Hi,
first of all, in reality "partitioning a table" means creating a new table and migrating the existing data there (although theoretically you can use dbms_redefinition to partition an existing table, it is just doing the same thing behind the scenes). This means that you also get to decide what to do with the indexes: which will be local, which will be global (you can also re-evaluate some of the existing indexes and decide that they are not really needed).
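For reference, an online redefinition of this kind is typically sketched roughly as follows (this is an illustrative outline rather than a tested procedure; the schema name SCOTT and the interim table name PART_INTERIM are made up):

```sql
-- Assumes the interim table PART_INTERIM already exists with the desired
-- LIST partitioning. DBMS_REDEFINITION copies the data while the original
-- table remains available.
BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'PART');   -- raises if not possible
  DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'PART', 'PART_INTERIM');
END;
/

-- Recreate dependent objects (indexes, constraints, triggers) on the interim
-- table; this is the point where local vs. global indexes get decided.
DECLARE
  num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    'SCOTT', 'PART', 'PART_INTERIM',
    DBMS_REDEFINITION.CONS_ORIG_PARAMS,
    TRUE, TRUE, TRUE, TRUE, num_errors);
  -- Swap the two tables in the data dictionary.
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'PART', 'PART_INTERIM');
END;
/
```

The point of the "behind the scenes" remark stands: even this route builds a second, partitioned table and moves the data into it.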
Second of all, the choice of partitioning key looks odd. Partitioning is a data manageability technique more than anything else, so in order to benefit from it you need to find a good partitioning key. A recently added column named "data_seq" doesn't look like a good candidate. Can you provide more details about this column and why it was chosen as a partitioning key?
I suspect that whoever suggested this partitioning scheme is making a huge mistake. A non-partitioned table is much better in all aspects (including manageability and performance) than a wrongly partitioned one.
Best regards,
Nikolay -
Urgent : update of table QALS through transaction QA12 after save
Hello Experts ,
I need to update the field SELHERST of the standard table QALS through transaction QA12 after the Save button is clicked.
I have implemented exit QEVA0010, which is triggered after clicking the Save button, and have put the UPDATE command there, followed by a COMMIT WORK statement.
But it is not updating table QALS.
Thanks in advance.
When the Save button is pressed, SAP executes some checks and launches the update task. If you want your changes not to be overwritten by SAP, you need to submit them for execution during the update task.
So you need to use statements like:
- PERFORM ... ON COMMIT
- CALL FUNCTION ... IN UPDATE TASK
and execute the update in one of these. (Reference: [Updates in the R/3 System (BC-CST-UP)|http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCCSTUP/BCCSTUP_PT.pdf])
You may also try to force the update in the main program by declaring a pointer (field symbol) to the calling program's data, '(SAPMQEVA)QALS' (but that's not very clean).
Regards -
The appearence of tables in some Word 2010 documents changes after KB2880529
I wanted to alert you that, since our company applied KB2880529, some users are reporting that Word 2010 documents (.docx) have their appearance changed.
More precisely, the issue concerns tables inserted in the Word document: they are all messed up. For example, the beginning of the table can look OK, then a few lines of the table are badly misaligned (moved about 2 cm to the right), then you've got a few normal lines, then again several bad ones, and so on.
Also, some of the cells that were in the last column of the table may appear half outside of the table.
And the worst part is that, even if you take the time to manually fix the table, when you save the document and re-open it, everything is bad again, exactly as it was before. So basically the save doesn't work for the tables (it works if you change some text in the table, but not for the table itself: size of the columns, location of the columns, and so on).
I can't provide any file because they are of a very sensitive nature and can't leave our company; even if I removed most of the content, our IT security wouldn't allow it.
Anyway:
- We are absolutely sure it's KB2880529 that does this, because when we uninstall the KB and re-open the document, it looks normal again.
- It seems to concern only documents that were created some time ago using old Word 2003 templates and then opened in Word 2010. As far as I know it doesn't happen with 100% Word 2010 documents.
So we are currently building a package deployed by SCCM 2012 in order to uninstall it on all the PCs that received it a few days ago.
Let's hope you'll correct that issue... and, if possible, that in the near future you'll add a feature in SCCM 2012 to allow us to uninstall KBs, like it was possible with WSUS.
I'm sure there are some ways to fix the files one by one. Indeed, our tests indicate that it's possible, for example, to open them in LibreOffice 4.2 (a fork of OpenOffice), which displays them correctly, then save the file as .doc, then use Office 2010 to open this .doc and save it as .docx.
But what I forgot to indicate is that the issue probably affects thousands of documents that have been placed in a document management application (I'm not sure how to translate it correctly) over the years, documents that perhaps hundreds of users need to look at, for reference, from time to time... but basically the documents are frozen / archived; they must not be edited by anybody.
I very much doubt you can expect MS not to apply updates, including via the next Service Pack, just because some tables in some of your documents are corrupt. Besides, the same problem will quite likely resurface when you next upgrade to a newer version of Office. Ultimately, someone is going to have to check all the documents with tables and verify their content. The scope of that project might be narrowed down after you've checked a few documents and found some common features between those with the corrupt tables. It's easy enough to write a macro to test the files to see which ones have tables; that can serve as the first step in narrowing the scope of the project. It's also possible to have the macro that repairs the files restore their original time/date stamps, if that's important.
An entirely different approach would be to temporarily uninstall the update on one PC, then use that PC to convert all the documents to PDF and use the PDFs in the document management application. Since the documents "must not be edited by anybody", the PDF format is inherently more secure in that regard and can have security attributes set to prevent printing and/or content copying. The PDF format is also impervious to Word's tendency to change document layouts whenever you do little things like updating printers or changing between .doc and .docx formats.
Cheers
Paul Edstein
[MS MVP - Word] -
Fact and dimension table partition
My team is implementing new data-warehouse. I would like to know that when should we plan to do partition of fact and dimension table, before data comes in or after?
Hi,
It is recommended to partition the fact table (where we will have huge data). Automate the partitioning so that each day it creates a new partition to hold the latest data (by splitting the previous partition into two). Best practice is to partition on a transaction timestamp, load the incremental data into an empty staging table (Table_IN), and then switch that data into the main table (Table). Make sure both tables (Table and Table_IN) are on the same filegroup.
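The load-and-switch step just described can be sketched in T-SQL roughly like this (column names, the boundary dates, and the target partition number are illustrative assumptions, not taken from the thread):

```sql
-- Staging table on the same filegroup as the main table, with an
-- identical schema.
CREATE TABLE Table_IN (TxnID int NOT NULL, TxnDate datetime NOT NULL)
ON [PRIMARY];

-- ... bulk load the day's increment into Table_IN here ...

-- A CHECK constraint proving the staged rows fall inside the target
-- partition's boundaries is required before the switch is allowed.
ALTER TABLE Table_IN ADD CONSTRAINT CK_Table_IN_Range
  CHECK (TxnDate >= '2012-07-01' AND TxnDate < '2012-08-01');

-- Metadata-only switch of the staged rows into partition 5 of the
-- main table; no data is physically moved.
ALTER TABLE Table_IN SWITCH TO [Table] PARTITION 5;
```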
Refer below content for detailed info
Designing and Administrating Partitions in SQL Server 2012
A popular method of better managing large and active tables and indexes is the use of partitioning. Partitioning is a feature for segregating I/O workload within a SQL Server database so that I/O can be better balanced against the available I/O subsystems while providing better user response time, lower I/O latency, and faster backups and recovery. By partitioning tables and indexes across multiple filegroups, data retrieval and management is much quicker because only subsets of the data are used, while ensuring that the integrity of the database as a whole remains intact.
Tip
Partitioning is typically used for administrative or certain I/O performance scenarios. However, partitioning can also speed up some queries by enabling
lock escalation to a single partition, rather than to an entire table. You must allow lock escalation to move up to the partition level by setting it with either the Lock Escalation option of Database Options page in SSMS or by using the LOCK_ESCALATION option
of the ALTER TABLE statement.
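For instance, enabling partition-level lock escalation via the ALTER TABLE option mentioned above looks like this ('Orders' is a placeholder table name, not one from the text):

```sql
-- AUTO lets the engine escalate locks to the partition rather than the
-- whole table when the table is partitioned.
ALTER TABLE Orders SET (LOCK_ESCALATION = AUTO);
```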
After a table or index is partitioned, data is stored horizontally across multiple filegroups, so groups of data are mapped to individual partitions. Typical
scenarios for partitioning include large tables that become very difficult to manage, tables that are suffering performance degradation because of excessive I/O or blocking locks, table-centric maintenance processes that exceed the available time for maintenance,
and moving historical data from the active portion of a table to a partition with less activity.
Partitioning tables and indexes warrants a bit of planning before putting them into production. The usual approach to partitioning a table or index follows these
steps:
1. Create the filegroup(s) and file(s) used to hold the partitions defined by the partitioning scheme.
2. Create a partition function to map the rows of the table or index to specific partitions based on the values in a specified column. A very common partitioning function is based on the creation date of the record.
3. Create a partitioning scheme to map the partitions of the partitioned table to the specified filegroup(s) and, thereby, to specific locations on the Windows file system.
4. Create the table or index (or ALTER an existing table or index) by specifying the partition scheme as the storage location for the partitioned object.
Although Transact-SQL commands are available to perform every step described earlier, the Create Partition Wizard makes the entire process quick and easy through
an intuitive point-and-click interface. The next section provides an overview of using the Create Partition Wizard in SQL Server 2012, and an example later in this section shows the Transact-SQL commands.
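Steps 2 through 4 above can be sketched in T-SQL as follows (the object names and boundary dates are invented for the example):

```sql
-- Step 2: partition function mapping rows to partitions by date.
-- RANGE RIGHT means each boundary value belongs to the partition on its right.
CREATE PARTITION FUNCTION pf_ByQuarter (datetime)
AS RANGE RIGHT FOR VALUES ('2012-01-01', '2012-04-01', '2012-07-01');

-- Step 3: partition scheme mapping every partition to one filegroup here;
-- each partition could instead target its own filegroup.
CREATE PARTITION SCHEME ps_ByQuarter
AS PARTITION pf_ByQuarter ALL TO ([PRIMARY]);

-- Step 4: create the table on the partition scheme, partitioned by OrderDate.
CREATE TABLE Orders (OrderID int NOT NULL, OrderDate datetime NOT NULL)
ON ps_ByQuarter (OrderDate);
```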
Leveraging the Create Partition Wizard to Create Table and Index Partitions
The Create Partition Wizard can be used to divide data in large tables across multiple filegroups to increase performance and can be invoked by right-clicking
any table or index, selecting Storage, and then selecting Create Partition. The first step is to identify which columns to partition by reviewing all the columns available in the Available Partitioning Columns section located on the Select a Partitioning Column
dialog box, as displayed in Figure 3.13. This screen also includes additional options such as the following:
Figure 3.13. Selecting a partitioning column.
The next screen is called Select a Partition Function. This page is used for specifying the partition function by which the data will be partitioned. The options include using an existing partition function or creating a new one. The subsequent page is called New Partition Scheme. Here a DBA maps the rows of the tables being partitioned to the desired filegroups; either an existing partition scheme can be used or a new one needs to be created. The final screen is used for doing the actual mapping. On the Map Partitions page, specify the filegroup to be used for each partition and then enter a range for the values of the partitions. The ranges and settings on the grid include the following:
Note
By opening the Set Boundary Values dialog box, a DBA can set boundary values based on dates (for example, partition everything in a column after a specific
date). The data types are based on dates.
Designing table and index partitions is a DBA task that typically requires a joint effort with the database development team. The DBA must have a strong understanding
of the database, tables, and columns to make the correct choices for partitioning. For more information on partitioning, review Books Online.
Enhancements to Partitioning in SQL Server 2012
SQL Server 2012 now supports as many as 15,000 partitions. When using more than 1,000 partitions, Microsoft recommends that the instance of SQL Server have at least 16 GB of available memory. This recommendation particularly applies to partitioned indexes, especially those that are not aligned with the base table or with the clustered index of the table. Other Data Manipulation Language (DML) and Data Definition Language (DDL) statements may also run short of memory when processing a large number of partitions.
Certain DBCC commands may take longer to execute when processing a large number of partitions. On the other hand, a few DBCC commands can be scoped to the partition
level and, if so, can be used to perform their function on a subset of data in the partitioned table.
Queries may also benefit from a new query engine enhancement called partition elimination. SQL Server uses partition elimination automatically if it is available. Here's how it works. Assume a table has four partitions, with all the data for customers whose names begin with R, S, or T in the third partition. If a query's WHERE clause filters on customer name looking for 'System%', the query engine knows that it needs only partition three to answer the request. Thus, it might greatly reduce I/O for that query. On the other hand, some queries might take longer if there are more than 1,000 partitions and the query is not able to perform partition elimination.
Finally, SQL Server 2012 introduces some changes and improvements to the algorithms used to calculate partitioned index statistics. Primarily, SQL Server 2012 samples rows in a partitioned index when it is created or rebuilt, rather than scanning all available rows. This may sometimes result in somewhat different query behavior compared to the same queries running on earlier versions of SQL Server.
Administrating Data Using Partition Switching
Partitioning is useful to access and manage a subset of data while losing none of the integrity of the entire data set. There is one limitation, though. When
a partition is created on an existing table, new data is added to a specific partition or to the default partition if none is specified. That means the default partition might grow unwieldy if it is left unmanaged. (This concept is similar to how a clustered
index needs to be rebuilt from time to time to reestablish its fill factor setting.)
Switching partitions is a fast operation because no physical movement of data takes place. Instead, only the metadata pointers to the physical data are altered.
You can alter partitions using SQL Server Management Studio or with the ALTER TABLE...SWITCH
Transact-SQL statement. Both options enable you to ensure partitions are
well maintained. For example, you can transfer subsets of data between partitions, move tables between partitions, or combine partitions together. Because the ALTER TABLE...SWITCH statement
does not actually move the data, a few prerequisites must be in place:
• Partitions must use the same column when switching between two partitions.
• The source and target table must exist prior to the switch and must be on the same filegroup, along with their corresponding indexes,
index partitions, and indexed view partitions.
• The target partition must exist prior to the switch, and it must be empty, whether adding a table to an existing partitioned table
or moving a partition from one table to another. The same holds true when moving a partitioned table to a nonpartitioned table structure.
• The source and target tables must have the same columns in identical order with the same names, data types, and data type attributes
(length, precision, scale, and nullability). Computed columns must have identical syntax, as well as primary key constraints. The tables must also have the same settings for ANSI_NULLS and QUOTED_IDENTIFIER properties.
Clustered and nonclustered indexes must be identical. ROWGUID properties
and XML schemas must match. Finally, settings for in-row data storage must also be the same.
• The source and target tables must have matching nullability on the partitioning column. Although both NULL and NOT
NULL are supported, NOT
NULL is strongly recommended.
Likewise, the ALTER TABLE...SWITCH statement
will not work under certain circumstances:
• Full-text indexes, XML indexes, and old-fashioned SQL Server rules are not allowed (though CHECK constraints
are allowed).
• Tables in a merge replication scheme are not allowed. Tables in a transactional replication scheme are allowed with special caveats.
Triggers are allowed on tables but must not fire during the switch.
• Indexes on the source and target table must reside on the same partition as the tables themselves.
• Indexed views make partition switching difficult and have a lot of extra rules about how and when they can be switched. Refer to
the SQL Server Books Online if you want to perform partition switching on tables containing indexed views.
• Referential integrity can impact the use of partition switching. First, foreign keys on other tables cannot reference the source
table. If the source table holds the primary key, it cannot have a primary or foreign key relationship with the target table. If the target table holds the foreign key, it cannot have a primary or foreign key relationship with the source table.
In summary, simple tables can easily accommodate partition switching. The more complexity a source or target table exhibits, the more likely that careful planning
and extra work will be required to even make partition switching possible, let alone efficient.
Here’s an example where we create a partitioned table using a previously created partition scheme, called Date_Range_PartScheme1.
We then create a new, nonpartitioned table identical to the partitioned table residing on the same filegroup. We finish up switching the data from the partitioned table into the nonpartitioned table:
CREATE TABLE TransactionHistory_Partn1 (Xn_Hst_ID int, Xn_Type char(10))
ON Date_Range_PartScheme1 (Xn_Hst_ID);
GO
CREATE TABLE TransactionHistory_No_Partn (Xn_Hst_ID int, Xn_Type char(10))
ON main_filegroup;
GO
ALTER TABLE TransactionHistory_Partn1 SWITCH PARTITION 1 TO TransactionHistory_No_Partn;
GO
The next section shows how to use a more sophisticated, but very popular, approach to partition switching called a sliding
window partition.
Example and Best Practices for Managing Sliding Window Partitions
Assume that our AdventureWorks business is booming. The sales staff, and by extension the AdventureWorks2012 database, is very busy. We noticed over time that
the TransactionHistory table is very active as sales transactions are first entered and are still very active over their first month in the database. But the older the transactions are, the less activity they see. Consequently, we’d like to automatically group
transactions into four partitions per year, basically containing one quarter of the year’s data each, in a rolling partitioning. Any transaction older than one year will be purged or archived.
The answer to a scenario like the preceding one is called a sliding window partition because
we are constantly loading new data in and sliding old data over, eventually to be purged or archived. Before you begin, you must choose either a LEFT partition function window or a RIGHT partition function window:
1. How data is handled varies according to the choice of LEFT or RIGHT partition function window:
• With a LEFT strategy, partition1 holds the oldest data (Q4 data), partition2 holds data that is 6 to 9 months old (Q3), partition3 holds data that is 3 to 6 months old (Q2), and partition4 holds recent data less than 3 months old.
• With a RIGHT strategy, partition4 holds the oldest data (Q4), partition3 holds Q3 data, partition2 holds Q2 data, and partition1 holds recent data.
• Following the best practice, make sure there are empty partitions on both the leading edge (partition0) and trailing edge (partition5) of the partition range.
• RIGHT range functions usually make more sense to most people because it is natural to start ranges at their lowest value and work upward from there.
2. Assuming that a RIGHT partition function window is used, we first use the SPLIT subclause of the ALTER PARTITION FUNCTION statement to split empty partition5 into two empty partitions, 5 and 6.
3. We use the SWITCH subclause of ALTER TABLE to switch out partition4 to a staging table for archiving, or simply to drop and purge the data. Partition4 is now empty.
4. We can then use MERGE to combine the empty partitions 4 and 5, so that we're back to the same number of partitions as when we started. This way, partition3 becomes the new partition4, partition2 becomes the new partition3, and partition1 becomes the new partition2.
5. We can use SWITCH to push the new quarter's data into the spot of partition1.
Tip
Use the $PARTITION system
function to determine where a partition function places values within a range of partitions.
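For example (a minimal sketch; the function name pfTranQuarter and the quarterly boundary values are invented for illustration):

```sql
-- Hypothetical RANGE RIGHT partition function with quarterly boundaries
CREATE PARTITION FUNCTION pfTranQuarter (datetime)
AS RANGE RIGHT FOR VALUES ('20120101', '20120401', '20120701', '20121001');

-- $PARTITION tells you which partition a given value maps to;
-- here '20120515' falls in the [2012-04-01, 2012-07-01) range
SELECT $PARTITION.pfTranQuarter('20120515') AS PartitionNumber;
```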
Some best practices to consider for using a slide window partition include the following:
• Load newest data into a heap, and then add indexes after the load is finished. Delete oldest data or, when working with very large
data sets, drop the partition with the oldest data.
• Keep an empty staging partition at the leftmost and rightmost ends of the partition range to ensure that the partition splits performed when
loading in new data, and the merges performed after unloading old data, do not cause data movement.
• Do not split or merge a partition already populated with data because this can cause severe locking and explosive log growth.
• Create the load staging table in the same filegroup as the partition you are loading.
• Create the unload staging table in the same filegroup as the partition you are deleting.
• Don’t load a partition until its range boundary is met. For example, don’t create and load a partition meant to hold data that is
one to two months older before the current data has aged one month. Instead, continue to allow the latest partition to accumulate data until the data is ready for a new, full partition.
• Unload one partition at a time.
• The ALTER TABLE...SWITCH statement
issues a schema lock on the entire table. Keep this in mind if regular transactional activity is still going on while a table is being partitioned.
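The SPLIT/SWITCH/MERGE cycle in steps 2 through 5 can be sketched roughly as follows. All object names (partition function pfTranQuarter, scheme psTranQuarter, the tables and the filegroup) are invented for illustration, and the staging tables are assumed to already exist with matching structure on the correct filegroups:

```sql
-- Step 2: split the empty trailing partition so the range stays bounded by an empty one
ALTER PARTITION SCHEME psTranQuarter NEXT USED [FG_Current];
ALTER PARTITION FUNCTION pfTranQuarter() SPLIT RANGE ('20130101');

-- Step 3: switch the oldest populated partition out to a staging table for archive/purge
ALTER TABLE TransactionHistory SWITCH PARTITION 2 TO TransactionHistoryArchive;

-- Step 4: merge away the now-empty boundary to return to the original partition count
ALTER PARTITION FUNCTION pfTranQuarter() MERGE RANGE ('20120101');

-- Step 5: switch the newly loaded staging table in to become the newest partition
ALTER TABLE TransactionHistoryStage SWITCH TO TransactionHistory PARTITION 5;
```

Since SWITCH takes a schema lock on the whole table (per the last bullet above), this cycle is best scheduled in a quiet window.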
Thanks Shiven:) If Answer is Helpful, Please Vote -
Hi
I am trying to partition a table for performance improvement. I have used SAP Note 742243 as a reference, which is described below.
I have followed the procedure and created the partitions, but that is not visible in Oracle. Could anyone let me know how to approach the problem?
Many thanks for your help,
Deb Sircar
Using the database utility (transaction SE14), you can partition a table defined in the DDIC with the RANGE, LIST or HASH methods.
Since the use of transaction SE14 is not obvious, we will explain the partitioning of a table with an example.
Enter the table to be partitioned in transaction SE14, choose "Edit" and choose the "Storage parameter" pushbutton in the subsequent screen.
In the next menu, choose the "For new creation" pushbutton.
Place the cursor on "Table" and choose "Create parameter values" (second button) in the application toolbar.
You can select a template for the parameter values, for example, "Current Database Parameters".
The subsequent screen is similar to the following:
INDEX ORGANIZED
TABLESPACE
INITIAL EXTENT 16
NEXT EXTENT 160
MINIMUM EXTENTS 1
MAXIMUM EXTENTS 300
PCT INCREASE 0
FREELISTS 1
FREELIST GROUPS 1
PCT FREE 10
PCT USED 40
PARTITION BY
You can select the partitioning method in the "PARTITION BY" parameter with the input help.
Depending on the type you select, more parameters will be displayed, for example, the "COLUMN LIST" parameter for RANGE, and "HIGH VALUE" for the displayed partition.
INDEX ORGANIZED
TABLESPACE
INITIAL EXTENT 16
NEXT EXTENT 160
MINIMUM EXTENTS 1
MAXIMUM EXTENTS 300
PCT INCREASE 0
FREELISTS 1
FREELIST GROUPS 1
PCT FREE 10
PCT USED 40
PARTITION BY RANGE
COLUMN LIST
PARTITION
PARTITION NAME
HIGH VALUE
TABLESPACE
INITIAL EXTENT
For each parameter, you can enter one or multiple values. The values are case-sensitive and must be entered depending on the data type (if necessary, enclose them in single quotes (')).
To obtain another entry for a parameter, for example, for PARTITION or HIGH VALUE, you must place the cursor on the parameter and choose "Insert parameter values" (fourth button) in the application toolbar.
PCT FREE 10
PCT USED 40
PARTITION BY RANGE
COLUMN LIST
PARTITION
PARTITION NAME
HIGH VALUE
HIGH VALUE
TABLESPACE
INITIAL EXTENT
PARTITION
PARTITION NAME
HIGH VALUE
HIGH VALUE
TABLESPACE
INITIAL EXTENT
If you enter several values for a parameter, the entries must follow the right syntax; for example, if you want to enter three field names in the "COLUMN LIST" parameter, the entry must look as follows:
COLUMN LIST "COL1", "COL2", "COL3"
After you have populated and saved the parameter values, you can create the required table with transaction SE14 in the database by choosing the "Activate and adjust database" pushbutton.
Partitioning of an index is only supported in line with the partitioning of the table (local index). You can specify this option by populating the "PARTITIONED" parameter:
PARTITIONED X
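For orientation, parameter values like the ones above correspond to Oracle DDL along these lines (SE14 generates the real statement; the table, columns, partition names, and tablespace below are invented, and the fiscal-year ranges echo the earlier question in this thread):

```sql
CREATE TABLE zsales (
  mandt   VARCHAR2(3),
  gjahr   VARCHAR2(4),
  belnr   VARCHAR2(10),
  amount  NUMBER
)
PCTFREE 10 PCTUSED 40
PARTITION BY RANGE (gjahr)
( PARTITION z2002 VALUES LESS THAN ('2003')   TABLESPACE psapexampled,
  PARTITION z2003 VALUES LESS THAN ('2004')   TABLESPACE psapexampled,
  PARTITION zmax  VALUES LESS THAN (MAXVALUE) TABLESPACE psapexampled );
```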
Header Data
Release Status: Released for Customer
Released on: 03.06.2004 15:01:27
Master Language: German
Priority: Recommendations/additional info
Category: Installation information
Primary Component: BC-DB-ORA Oracle
Hi,
1. Do you have sqlplus access to the database?
2. What do you mean by "But that is not visible in oracle"? Is the table still non-partitioned on the database? Is the table shown as "active" and "exists in database" in SE14? What does a check -> database object show?
3. Can you please also add a few details about what exactly you wanted to achieve, like table name, partitioning layout, indexes.
With this info we might be able to help you.
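For point 2, a quick sqlplus check of whether the table really is partitioned on the database (the owner SAPR3 and table name ZTAB are placeholders; substitute your own):

```sql
SELECT partitioned
  FROM dba_tables
 WHERE owner = 'SAPR3' AND table_name = 'ZTAB';

-- If it says YES, list the partitions and their boundaries:
SELECT partition_name, high_value, tablespace_name
  FROM dba_tab_partitions
 WHERE table_owner = 'SAPR3' AND table_name = 'ZTAB';
```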
Best regards, Michael -
Vertical table partition?
Hi,
our database contains XML documents in 11 languages, one table for each language. Besides, there is meta information about each document which is the same for each language. Therefore, this meta information has been placed in a separate table.
Now to the problem: common database queries search in the documents (Intermedia) as well
in the meta data, what means that there is a join between the table that contains the documents (columns: key, documenttext) and the table that contains the meta information (columns: key and 18 others). Unfortunately after having located documents via Intermedia index, Oracle reads the corresponding data blocks for extracting the key. Because the documents are up to 4K, Oracle reads much information that it does not need. Is it possible to have a vertical partition of the documents table in order to be able to efficiently read only the key information instead of the whole table rows?
Thank You for any hint,
Markus
Thanks for your comment, and sorry for the delay; my Internet went down yesterday. As you said, throw away the existing table structure: I agree with you, but there is a limitation right now. Our application only understands the current structure, and any change in the database would require changing the execution engine. Our application provides an interface for ad hoc reporting. We would certainly use dimensional modeling, but unfortunately our CTO believes the existing structure is much better than a dimensional model: it gives the flexibility to load any data (retail, clickstream, etc.) and provide reporting on it. The architecture is bad, but this is how we provide analytics to customers....
To cut a long story short, they have assigned me the task of improving its performance. I have already improved it using different Oracle features such as table partitioning, subpartitioning, indexing, and more, but we want more. I have observed that during the data loading process the UPDATE part takes much of the time; if I could do this another way, it would reduce the overall data loading time. We extract files from the client server, perform transformations according to business rules, and write the result to a flat file; finally our application uses SQL*Loader to load it into a flat table and then divides it into different tables. Some of our existing clients have 250 columns; some have 350-plus.
I think I have explained enough, but if you have more questions I will be ready to answer them. Please suggest, from your experience, how I could improve the performance of the existing system. -
I am planning for table partitioning of existing tables...
The database is a production 10gR1 system. I need to upgrade it to 11gR2 and also implement table partitioning. What is the best way to do this with minimum downtime and zero data loss? Please advise. The tables are 20 to 40 GB each. The source database is on Solaris and we plan to migrate to Linux. My question is about partitioning with minimum downtime and zero data loss.
The scenario is like this:
I take an export of the source table and create a new partitioned table with a different name, then import the data into the new table, drop the source table, and rename the new partitioned table to the source table's name. That gives very little downtime, but we will lose some data, because transactions done in the meantime will not be in the exported file. Is there another way? I hope you got my question. -
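The copy-and-rename approach described above looks roughly like this in SQL (table and column names are invented; note the delta copy assumes a reliable change-tracking column, which is exactly the gap the in-flight transactions fall into):

```sql
-- Build the partitioned copy alongside the live table
CREATE TABLE orders_part
PARTITION BY RANGE (order_date)
( PARTITION p2010 VALUES LESS THAN (DATE '2011-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE) )
AS SELECT * FROM orders;

-- During a short outage, copy rows that arrived since the initial copy,
-- then swap names so applications see the partitioned table
INSERT INTO orders_part
  SELECT * FROM orders WHERE last_changed > :initial_copy_time;

ALTER TABLE orders      RENAME TO orders_old;
ALTER TABLE orders_part RENAME TO orders;
```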
Please advise about table partitioning
Hi.
I need some advice on table partitioning. Here is the problem: my application processes 7 types of documents (all like invoices). I have created master/detail tables for each type of document. But there are no big differences between the documents, so I was planning to move all of these document types into one master and one detail table and use table partitioning based on document type. This would simplify my programming very much... but I am not sure how this will affect application performance. We process a large number of documents (over 300 a day).
So what do you think?
1) If you are using the standard edition, partitioning is not an option. Partitioning is an extra-cost licensing option that is only available to enterprise edition customers.
2) I am having a bit of difficulty reconciling the numbers you are providing with your descriptions. An application that inserts 5 MB of data a day and adds fewer than 2 million rows per year is a pretty small database, particularly to consider partitioning for. 50 active users doesn't seem terribly large. It is far from obvious to me that separate schemas by year provide any sort of benefit overall. OLTP operations should be able to use appropriate indexes to avoid all the read-only data. And a single schema with a single set of tables minimizes the number of hard parses you have to do, the number of queries that have to be rewritten every year to go after new schemas of data, etc.
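Point 1 can be checked directly on the database; the v$option view reports whether the Partitioning option is installed in the instance:

```sql
SELECT parameter, value
  FROM v$option
 WHERE parameter = 'Partitioning';   -- TRUE only when the option is installed
```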
Justin -
EJB transaction hanging over VPN connection
EJB transaction hanging over VPN connection
I am trying to diagnose a problem that only occurs when I am doing testing from home and connected via VPN to work.
We are using WL 10.3. Our clusters are set up to communicate via multicast. We have a stateless session bean that uses many different resources (Sybase DB, other EJBs, Sonic JMS). I have a local EJB on my home network that I am testing and it connects with other EJBs and resources on my corporate network. While I don't think it is related to the problem, there is a cluster of the same EJB I am trying to test deployed to my corporate environment. i.e. I am testing ResetService EJB and there is a deployed domain cluster of several ResetService EJBs in the environment I am connecting to. My local server and admin domain are named differently than the ones deployed, and my local ejb shouldn't try to connect to the cluster in that environment.
When I make a call into my local ejb everything seems to work as expected until it gets to the commit part of the transaction (after my ejb returns). At this point the executing thread hangs at the following location
"[STUCK] ExecuteThread: '16' for queue: 'weblogic.kernel.Default (self-tuning)'" daemon prio=2 tid=0x2e8cb400 nid=0x22fc in Object.wait() [0x3031f000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x091a4b18> (a weblogic.transaction.internal.ServerTransactionImpl)
at weblogic.transaction.internal.ServerTransactionImpl.globalPrePrepare(ServerTransactionImpl.java:2130)
- locked <0x091a4b18> (a weblogic.transaction.internal.ServerTransactionImpl)
at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:266)
at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:233)
at weblogic.ejb.container.internal.BaseRemoteObject.postInvoke1(BaseRemoteObject.java:621)
at weblogic.ejb.container.internal.StatelessRemoteObject.postInvoke1(StatelessRemoteObject.java:60)
at weblogic.ejb.container.internal.BaseRemoteObject.postInvokeTxRetry(BaseRemoteObject.java:441)
at advantage.tradesupport.rates.resets.ejb.ResetTradeServiceBean_shnf9c_EOImpl.resetTradeByRate(ResetTradeServiceBean_shnf9c_EOImpl.java:1921)
at advantage.tradesupport.rates.resets.ejb.ResetTradeServiceBean_shnf9c_EOImpl_WLSkel.invoke(Unknown Source)
at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:589)
at weblogic.rmi.cluster.ClusterableServerRef.invoke(ClusterableServerRef.java:230)
at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:477)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:147)
at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:473)
at weblogic.rmi.internal.wls.WLSExecuteRequest.run(WLSExecuteRequest.java:118)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
This thread is obviously waiting for a notification from some other thread, but it is unclear what exactly it is waiting for. This only seems to happen when I am connected via VPN, because when I try this same setup connected directly to my corporate network at work it works fine.
I connected w/ debugger and tried to dig. Some interesting observations I see are.
1) The ServerTransactionImpl object that is hanging has a 'preferredHost' of a different machine on my corporate network. It is an EJB that my service got some information from, but it wasn't involved in any create, update, or delete aspects of the tran, so in theory it shouldn't be part of it. Does anyone know what exactly this preferredHost attribute signifies? I also see this same remote ejb server reference defined in a ServerCoordinatorDescriptor variable.
2) I also sometimes see other XA resources (JMS) with this ServerTransactionImpl that point to references of resources that live on other machines on my corporate network (not my local ejb). This doesn't always seem to be the case though.
VPN will definitely block multicast packets from the corporate env back to my machine. I don't think it works from my machine to the corporate env machines either.
I also have two IPs when connected over VPN: one for my local home network and one given to me by my VPN client. I have set up my local WL server to listen on the VPN-client-provided IP, which the corporate machines should be able to resolve and send TCP messages over.
This definitely seems to be transaction related, because when I remove the tran support from my ejb things work as expected. The problem is I need to test within the transaction just like it runs in a deployed environment, so this isn't really a fix.
My theory: My local test WL EJB is waiting for a communication from some other server before it can commit the tran. This communication is blocked due to my VPN connection. I just don't know what exactly it is trying to communicate and how to resolve if that is in fact the problem. I thought about looking into unicast rather than multicast for cluster communication but in my mind that doesn't come into play here because I am not deploying a local cluster for my EJB and I thought multicast was just used for inter-cluster communication.
I also saw some docs talking about resources need to be uniquely named when inter-domain communication is performed (http://docs.oracle.com/cd/E13222_01/wls/docs81/ConsoleHelp/jta.html#1120356).
Anyone have any thoughts on what might be going on?
Here is some log output from my local WL instance:
Servers in this output--
My test Server - AdminServer+10.125.105.14:7001+LJDomain+t3
Server in env I read data from during ejb method call - TradeAccessTS_lkcma00061-1+171.149.24.240:17501+TradeSupportApplication2+t3
Server in env I read data from during ejb method call - MesaService_lkcma00065-1+171.149.24.244:17501+MESAApplication1+t3
Server in env I read data from during ejb method call - DataAccess_lkcma00055-0+171.149.24.234:17500+AdvantageApplication1+t3
Server in env I read data from during ejb method call - DataAccess_lkcma00053-6+171.149.24.232:17506+AdvantageApplication1+t3
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTA2PC> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0005C168CCCC337F43A8> <> <1332278606895> <BEA-000000> <BEA1-0005C168CCCC337F43A8: [EJB advantage.tradesupport.rates.resets.ejb.ResetTradeServiceBean.resetTradeByRate(java.lang.Long,java.util.Set,java.lang.String,java.lang.String,boolean,advantage.common.service.context.ServiceContext)]: ServerTransactionImpl.commit()>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTA2PC> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0005C168CCCC337F43A8> <> <1332278606895> <BEA-000000> <BEA1-0005C168CCCC337F43A8: [EJB advantage.tradesupport.rates.resets.ejb.ResetTradeServiceBean.resetTradeByRate(java.lang.Long,java.util.Set,java.lang.String,java.lang.String,boolean,advantage.common.service.context.ServiceContext)]: TX[BEA1-0005C168CCCC337F43A8] active-->pre_preparing>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTA2PC> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0005C168CCCC337F43A8> <> <1332278606896> <BEA-000000> <SC[LJDomain+AdminServer] active-->pre-preparing>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTA2PC> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0005C168CCCC337F43A8> <> <1332278606896> <BEA-000000> <BEA1-0005C168CCCC337F43A8: [EJB advantage.tradesupport.rates.resets.ejb.ResetTradeServiceBean.resetTradeByRate(java.lang.Long,java.util.Set,java.lang.String,java.lang.String,boolean,advantage.common.service.context.ServiceContext)]: before completion iteration #0>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTA2PC> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0005C168CCCC337F43A8> <> <1332278606896> <BEA-000000> <BEA1-0005C168CCCC337F43A8: [EJB advantage.tradesupport.rates.resets.ejb.ResetTradeServiceBean.resetTradeByRate(java.lang.Long,java.util.Set,java.lang.String,java.lang.String,boolean,advantage.common.service.context.ServiceContext)]: Synchronization[weblogic.ejb.container.internal.TxManager$TxListener@11b0cf9].beforeCompletion()>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTA2PC> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0005C168CCCC337F43A8> <> <1332278606897> <BEA-000000> <BEA1-0005C168CCCC337F43A8: [EJB advantage.tradesupport.rates.resets.ejb.ResetTradeServiceBean.resetTradeByRate(java.lang.Long,java.util.Set,java.lang.String,java.lang.String,boolean,advantage.common.service.context.ServiceContext)]: Synchronization[weblogic.ejb.container.internal.TxManager$TxListener@11b0cf9].beforeCompletion() END>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTA2PC> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0005C168CCCC337F43A8> <> <1332278606898> <BEA-000000> <SC[LJDomain+AdminServer] pre-preparing-->pre-prepared>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTA2PC> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0005C168CCCC337F43A8> <> <1332278606898> <BEA-000000> <SC[TradeSupportApplication2+TradeAccessTS_lkcma00061-1] active-->pre-preparing>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTA2PC> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0005C168CCCC337F43A8> <> <1332278606898> <BEA-000000> <BEA1-0005C168CCCC337F43A8: [EJB advantage.tradesupport.rates.resets.ejb.ResetTradeServiceBean.resetTradeByRate(java.lang.Long,java.util.Set,java.lang.String,java.lang.String,boolean,advantage.common.service.context.ServiceContext)]: delist weblogic.jdbc.wrapper.JTSXAResourceImpl, TMSUSPEND, beforeState=new, startThread=null>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTA2PC> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0005C168CCCC337F43A8> <> <1332278606898> <BEA-000000> <BEA1-0005C168CCCC337F43A8: [EJB advantage.tradesupport.rates.resets.ejb.ResetTradeServiceBean.resetTradeByRate(java.lang.Long,java.util.Set,java.lang.String,java.lang.String,boolean,advantage.common.service.context.ServiceContext)]: delist weblogic.jdbc.wrapper.JTSXAResourceImpl, afterStatenew>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTA2PC> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0005C168CCCC337F43A8> <> <1332278606899> <BEA-000000> <BEA1-0005C168CCCC337F43A8: [EJB advantage.tradesupport.rates.resets.ejb.ResetTradeServiceBean.resetTradeByRate(java.lang.Long,java.util.Set,java.lang.String,java.lang.String,boolean,advantage.common.service.context.ServiceContext)]: sc(TradeAccessTS_lkcma00061-1+171.149.24.240:17501+TradeSupportApplication2+t3+).startPrePrepareAndChain>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTA2PC> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0005C168CCCC337F43A8> <> <1332278606900> <BEA-000000> <SC[TradeSupportApplication2+TradeAccessTS_lkcma00061-1] pre-preparing-->pre-preparing>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTA2PC> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0005C168CCCC337F43A8> <> <1332278606902> <BEA-000000> <BEA1-0005C168CCCC337F43A8: [EJB advantage.tradesupport.rates.resets.ejb.ResetTradeServiceBean.resetTradeByRate(java.lang.Long,java.util.Set,java.lang.String,java.lang.String,boolean,advantage.common.service.context.ServiceContext)]: delist weblogic.jdbc.wrapper.JTSXAResourceImpl, TMSUSPEND, beforeState=new, startThread=null>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTA2PC> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0005C168CCCC337F43A8> <> <1332278606903> <BEA-000000> <BEA1-0005C168CCCC337F43A8: [EJB advantage.tradesupport.rates.resets.ejb.ResetTradeServiceBean.resetTradeByRate(java.lang.Long,java.util.Set,java.lang.String,java.lang.String,boolean,advantage.common.service.context.ServiceContext)]: delist weblogic.jdbc.wrapper.JTSXAResourceImpl, afterStatenew>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTANaming> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0005C168CCCC337F43A8> <> <1332278606903> <BEA-000000> <SecureAction.runAction Use Subject= <anonymous>for action:sc.startPrePrepareAndChain to t3://171.149.24.240:17501>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278606903> <BEA-000000> <PropagationContext: Peer=null, Version=4>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278606904> <BEA-000000> < +++ otherPeerInfo.getMajor() :: 10>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278606904> <BEA-000000> < +++ otherPeerInfo.getMinor() :: 3>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278606904> <BEA-000000> < +++ otherPeerInfo.getServicePack() :: 1>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278606905> <BEA-000000> < +++ otherPeerInfo.getRollingPatch() :: 0>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278606905> <BEA-000000> < +++ otherPeerInfo.hasTemporaryPatch() :: false>
####<Mar 20, 2012 4:23:26 PM CDT> <Debug> <JTA2PC> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0005C168CCCC337F43A8> <> <1332278606905> <BEA-000000> <BEA1-0005C168CCCC337F43A8: [EJB advantage.tradesupport.rates.resets.ejb.ResetTradeServiceBean.resetTradeByRate(java.lang.Long,java.util.Set,java.lang.String,java.lang.String,boolean,advantage.common.service.context.ServiceContext)]: waitForPrePrepareAcks AdminServer+10.125.105.14:7001+LJDomain+t3+=>pre-prepared TradeAccessTS_lkcma00061-1+171.149.24.240:17501+TradeSupportApplication2+t3+=>pre-preparing MesaService_lkcma00065-1+171.149.24.244:17501+MESAApplication1+t3+=>active DataAccess_lkcma00055-0+171.149.24.234:17500+AdvantageApplication1+t3+=>active DataAccess_lkcma00053-6+171.149.24.232:17506+AdvantageApplication1+t3+=>active>
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Then eventually I get timeout errors like below in log
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789379> <BEA-000000> < +++ otherPeerInfo.getMajor() :: 10>
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789380> <BEA-000000> < +++ otherPeerInfo.getMinor() :: 3>
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789380> <BEA-000000> < +++ otherPeerInfo.getServicePack() :: 1>
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789381> <BEA-000000> < +++ otherPeerInfo.getRollingPatch() :: 0>
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789382> <BEA-000000> < +++ otherPeerInfo.hasTemporaryPatch() :: false>
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789382> <BEA-000000> < +++ Using new Method for reading rollback reason....>
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTANaming> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789383> <BEA-000000> <RMI call coming from = TradeSupportApplication2>
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTANaming> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789384> <BEA-000000> <SecureAction.stranger Subject used on received call: <anonymous> for action: startRollback AUTHENTICATION UNDETERMINABLE use SecurityInteropMode>
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789385> <BEA-000000> <PropagationContext.getTransaction: tx=null>
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789385> <BEA-000000> <PropagationContext.getTransaction: xid=BEA1-0005C168CCCC337F43A8, isCoordinator=true, infectCoordinatorFirstTime=false, txProps={weblogic.transaction.name=[EJB advantage.tradesupport.rates.resets.ejb.ResetTradeServiceBean.resetTradeByRate(java.lang.Long,java.util.Set,java.lang.String,java.lang.String,boolean,advantage.common.service.context.ServiceContext)], weblogic.jdbc=t3://171.149.24.240:17501}>
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789391> <BEA-000000> < +++ looking up class : weblogic.transaction.internal.TimedOutException :: in classloader : java.net.URLClassLoader@15e92d7>
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789397> <BEA-000000> < +++ looking up class : weblogic.transaction.TimedOutException :: in classloader : java.net.URLClassLoader@15e92d7>
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789398> <BEA-000000> < +++ looking up class : java.lang.Exception :: in classloader : java.net.URLClassLoader@15e92d7>
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789398> <BEA-000000> < +++ looking up class : java.lang.Throwable :: in classloader : java.net.URLClassLoader@15e92d7>
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789402> <BEA-000000> < +++ looking up class : [Ljava.lang.StackTraceElement; :: in classloader : java.net.URLClassLoader@15e92d7>
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789403> <BEA-000000> < +++ looking up class : java.lang.StackTraceElement :: in classloader : java.net.URLClassLoader@15e92d7>
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTAPropagate> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789404> <BEA-000000> < +++ converted bytes to rollback reason : weblogic.transaction.internal.TimedOutException: Timed out tx=BEA1-0005C168CCCC337F43A8 after 30000 seconds>
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTA2PC> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789404> <BEA-000000> <Name=[EJB advantage.tradesupport.rates.resets.ejb.ResetTradeServiceBean.resetTradeByRate(java.lang.Long,java.util.Set,java.lang.String,java.lang.String,boolean,advantage.common.service.context.ServiceContext)],Xid=BEA1-0005C168CCCC337F43A8(21880283),Status=Marked rollback. [Reason=weblogic.transaction.internal.TimedOutException: Timed out tx=BEA1-0005C168CCCC337F43A8 after 30000 seconds],numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=502,seconds left=29497,activeThread=Thread[[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)',5,Pooled Threads],XAServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(ServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(state=new,assigned=none),xar=null,re-Registered = false),SCInfo[LJDomain+AdminServer]=(state=pre-prepared),SCInfo[TradeSupportApplication2+TradeAccessTS_lkcma00061-1]=(state=pre-preparing),SCInfo[MESAApplication1+MesaService_lkcma00065-1]=(state=active),SCInfo[AdvantageApplication1+DataAccess_lkcma00055-0]=(state=active),SCInfo[AdvantageApplication1+DataAccess_lkcma00053-6]=(state=active),properties=({weblogic.transaction.name=[EJB advantage.tradesupport.rates.resets.ejb.ResetTradeServiceBean.resetTradeByRate(java.lang.Long,java.util.Set,java.lang.String,java.lang.String,boolean,advantage.common.service.context.ServiceContext)], weblogic.jdbc=t3://171.149.24.240:17501}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=AdminServer+10.125.105.14:7001+LJDomain+t3+, XAResources={LJDomain.AdminServer.JMSXASessionPool.advantage.jms.queue.sonicConnectionFactory, LJDomain.AdminServer.JMSXASessionPool.advantage.jms.topic.sonicConnectionFactory, WLStore_LJDomain__WLS_AdminServer, 
WSATGatewayRM_AdminServer_LJDomain},NonXAResources={})],CoordinatorURL=AdminServer+10.125.105.14:7001+LJDomain+t3+) wakeUpAfterSeconds(60)>
####<Mar 20, 2012 4:26:29 PM CDT> <Debug> <JTA2PC> <F68B599F56D71> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1332278789406> <BEA-000000> <BEA1-0005C168CCCC337F43A8: [EJB advantage.tradesupport.rates.resets.ejb.ResetTradeServiceBean.resetTradeByRate(java.lang.Long,java.util.Set,java.lang.String,java.lang.String,boolean,advantage.common.service.context.ServiceContext)]: setProperty: weblogic.transaction.name=[EJB advantage.tradesupport.rates.resets.ejb.ResetTradeServiceBean.resetTradeByRate(java.lang.Long,java.util.Set,java.lang.String,java.lang.String,boolean,advantage.common.service.context.ServiceContext)]>
Hello asirigaya,
MSDTC configuration does not belong in this forum; this forum's topic is "Discuss client application development using Windows Forms controls and features."
I didn't see an MSDN forum for this kind of question, so I will help you move this case to "Where is the forum for".
Regards,
Barry Wang
-
Getting the final internal table of a transaction VA05
Hi,
Can anyone help me in getting the final internal table of transaction VA05 when it is called from a program? I need to use this internal table for further processing.
Thanks in advance,
Vinay.
Hi,
Not quite sure what you are after but the following may help.
For a start, VA05 uses FM RV_SALES_DOCUMENT_VIEW_3 to select its data. Your program could simply call this. However, if you are doing a CALL TRANSACTION from your program to VA05 and you want to get the results back, then you could utilise some 'old style' userexits available in VA05.
Two choices here:
1. At the end of FM RV_SALES_DOCUMENT_VIEW_3 there is a form END_MODIFICATION, or
2. When VA05 calls RV_SALES_DOCUMENT_VIEW_3 it performs a user-sort form in program MV75AFZ1
Both of these belong to package VMOD, which means you can modify them by simply registering the object in OSS (these are 'old style' exits from before SMOD/CMOD and are prevalent in SD).
From either of these you could EXPORT the result table to memory, to be IMPORTed when control returns to your program.
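As a rough sketch of that EXPORT/IMPORT handover (the table name LT_RESULT, the structure VBMTV, and the memory ID are assumptions for illustration, not taken from the standard code):

```abap
* In the exit (e.g. form END_MODIFICATION): push the result
* table into ABAP memory under an agreed memory ID.
EXPORT lt_result FROM lt_result TO MEMORY ID 'ZVA05_RESULT'.

* In your calling program, after CALL TRANSACTION 'VA05':
DATA lt_result TYPE STANDARD TABLE OF vbmtv.
IMPORT lt_result TO lt_result FROM MEMORY ID 'ZVA05_RESULT'.
IF sy-subrc = 0.
  FREE MEMORY ID 'ZVA05_RESULT'.   " clean up after reading
  " ... further processing of lt_result ...
ENDIF.
```

Both programs just have to agree on the memory ID; ABAP memory survives the CALL TRANSACTION boundary within the same session.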
Thanks,
Pete -
Hi there,
I have a Sony Notebook SVS13A3Z9ES with preinstalled Windows 8, which I updated to Windows 8.1. The notebook has a 250GB SSD which had the following GPT partitions:
Partition 1: SONYSYS - 260MB (Sony Vaio tools, type OEM)
Partition 2: Windows RE - 1.4 GB (type OEM)
Partition 3: EFI System Partition - 260 MB (type System)
Partition 4: MSR 128 MB (type reserved)
Partition 5: About 240 GB - (incl. Windows, type Primary)
Partition 6: Windows Recovery 350 MB (type OEM)
Partition 7: Recovery - 28 GB (type OEM) -> I created a recovery USB stick instead and now have another primary partition
I then shrank the Windows partition to 101GB with EaseUS Partition Master 3.0, to create another partition dedicated to data only. I didn't use the Windows built-in partition tool because I still wanted to shift partition 6,
so that I could extend only partition 7 into the final data partition. After I activated the shrink in EaseUS I had to restart the notebook, and was immediately brought to the Windows RE environment:
- asking me for selecting language
- afterwards I was brought to the Option Menu
* Continue Booting -> led me back to the language selection
* Turn off PC, and
* Trouble Shooting ->
Proceeding with Troubleshooting
* Recovery and Maintenance -> tried to start VAIO Care but couldn't access it -> back to Option Menu
* Refresh PC -> led to the message
“The drive where Windows is installed is locked. Unlock the drive and try again.”
It seems that the C: drive was locked after shrinking the partition with EaseUS.
* Advanced Options ->
Proceeding with Advanced Options:
* Automatic Repair -> couldn't be executed; the log file is on C:. From cmd the C:\> drive can be accessed, but the Windows folder is not found (probably locked or hidden)
* Recovery Point Recovery -> no recovery points were detected -> back to Troubleshooting
* System Image Recovery -> no image points were detected -> back to Troubleshooting
* UEFI Firmware Settings -> led to the Vaio Care RescueMode screen which can be seen here
http://community.sony.com/t5/VAIO-Upgrade-Backup-Recovery/How-can-I-enter-bios-on-my-vaio-I-want-to-boot-from-the-cd-drive/td-p/60063
e.g. starting Windows from there led to WinRE again
* cmd window ->
Finally we have the cmd window. There are several postings on the internet with potential solutions, but none of them has helped me so far. Example postings:
http://superuser.com/questions/460762/how-can-i-repair-the-windows-8-efi-bootloader
http://www.qliktips.com/2012/11/fix-windows-8-boot-issue.html
http://pcsupport.about.com/od/fixtheproblem/ht/rebuild-bcd-store-windows.htm
http://www.winhelp.us/repair-your-computer-in-windows-8.html
I tried so far:
- chkdsk -> executed, no problems
- Bootrec /fixmbr -> executed
- Bootrec /fixboot -> executed
- Bootrec /rebuildbcd, result:
Successfully scanned Windows installations.
Total identified Windows installations: 0
The operation completed successfully.
A restart brought me directly back to the language selection and Windows RE again.
BCDBoot c:\Windows -> executed successfully, but a restart took me back to WinRE
Then I assigned a letter to the EFI partition using diskpart v6.2.9200 and replaced bcd file with
1. method: BCDBoot c:\Windows /s z: /f ALL (z: letter of EFI partition) ->
executed successfully but restart back to WinRE
2. method: Bootrec /rebuildbcd -> executed successfully but restart back to WinRE
I had the recovery USB stick inserted but it was never used so far.
Do you have any other ideas for restoring the boot process, or an approach for unlocking the GPT C: drive from within Windows again? I would appreciate that very much.
Hi,
According to your description, the system was damaged by the partitioning tool.
Since you upgraded to Windows 8.1, you cannot use Windows 8 recovery media to repair the system.
If you have important data on the SSD, I suggest that you move the SSD to another computer and back up the data.
As you said, you had the recovery USB stick inserted but it was never used so far.
For this issue, please change your BIOS settings for USB boot.
I suggest you use your recovery USB stick to reinstall your system.
Regards,
Kelvin Xu
TechNet Community Support -
TABLES FOR MASTER & TRANSACTIONAL DATA
Hi all ,
can somebody help me with the tables for master & transactional data in
DP, SNP, PP/DS?
Thanks
Hi,
You can find them out as per your requirement.
Use transaction SE80 - ABAP Workbench -->
select Repository Information System -->
select ABAP Dictionary --> Database tables -->
In the right-hand window you will get a screen; enter the details as below:
Standard selection screen
Table name: enter *
In the Application Component field:
you will get a tree structure to select the application component;
in it, expand the nodes SCM --> SCM-APO --> SCM-APO-MD for Master Data.
Here you will get the applications -- double-click on the required application,
e.g. double-click on SCM-APO-MD-PR for Product.
Here you will get the list of tables related to the application component Product:
<b>/SAPAPO/MATKEY Product</b>
Some of the Master Data Tables:
/SAPAPO/APNTYPE APN Type
/SAPAPO/APNTYPET Alternative Product Number Type
/SAPAPO/APO01 APO Planning Version
/SAPAPO/APO01DEL Deletion Log File Versions
/SAPAPO/APPLOCS Location Master: Relevant Location Types for Application
/SAPAPO/APPLS Application Types : SAP Application Types
/SAPAPO/CD_LOC Customizing Change Documents Location
/SAPAPO/CD_PPRFL Customizing: Change Documents: Product Profiles
/SAPAPO/CD_PRDHD Customizing: Change Documents: Product
/SAPAPO/CD_PRDLC Customizing: Change Documents: Location Product
/SAPAPO/CD_PRDLW Customizing: Change Documents: Storage Type Product
/SAPAPO/CD_PRDWH Customizing: Change Documents: Storage Type Product
/SAPAPO/CONSPROF Model Consistency Check: Check Profile
/SAPAPO/CONSPROT Model Consistency Check: Check Profile Descriptions
/SAPAPO/CUSTCHK Model Consistency Check: User-Specific Checks
/SAPAPO/CUSTCHKT Model Consistency Check: Descriptions of User-Spec. Checks
/SAPAPO/GRPTYPE Product Group Types
/SAPAPO/GRPTYPET Description of Product Grouping
/SAPAPO/LOC Locations
/SAPAPO/LOC_SUB Mapping Table for Sublocations
/SAPAPO/LOCCOMP Table Obsolete Since SCM Release 4.1
/SAPAPO/LOCMAP Mapping Table for Locations
/SAPAPO/LOCMOT Location: Means-of-Transport-Dependent Attributes
/SAPAPO/LOCPROF Version-Dependent Location Profile for Penalty Determinati
/SAPAPO/LOCT Locations Short Text
/SAPAPO/LOCVER Location: Version-Dependent Fields
/SAPAPO/LTTYPE Location Resources
/SAPAPO/LTVER Location Resources: Version-Dependent Fields
/SAPAPO/MARM Units of Measure
/SAPAPO/MATAPN Alternative Product Number
/SAPAPO/MATBOD Product BOD Assignment
Hope this information will be helpful for you to work on and find out the important tables for your own business process. -
Long running table partitioning job
Dear HANA gurus,
I've just finished a table partitioning job for CDPOS (change document items), with 4 partitions by hash on 3 columns.
Total data volume is around 340GB and the table size was 32GB !!!!!
(The migration job was done without disabling change documents, so I am currently deleting data from the table with RSCDOK99.)
Before partitioning, the data volume of the table was around 32GB.
After partitioning, the size has changed to 25GB.
It took around one and a half hours with an exclusive lock, as mentioned in the HANA administration guide.
(It is the QA DB, so fewer complaints.)
I thought that I might not be able to do this in the production DB.
Does anyone have any idea for accelerating this task?? (This is the fastest DBMS, HANA!!!!)
Or do you have any plans for an online table partitioning functionality?? (To the HANA development team)
Any comments would be appreciated.
Cheers,
- Jason
Jason,
looks like we're cross talking here...
What was your rationale to partition the table in the first place?
=> To reduce the deletion time on CDPOS (as I mentioned, it was almost 10% of the whole data volume, so I wanted to cut the deletion time on the table via one of the benefits of partitioning, such as partition pruning)
Ok, I see where you're coming from, but did you ever try out if your idea would actually work?
As deletion of data is heavily related to locating the records to be deleted, creating an index would probably have been the better choice.
Thinking about it... you want to get rid of 10% of your data and in order to speed the overall process up, you decide to move 100% of the data into sets of 25% of the data - equally holding their 25% share of the 10% records to be deleted.
The deletion then should run along these 4 sets of 25% of data.
It's surely me, but where is the speedup potential here?
How many unloads happened during the re-partitioning?
=> It was fully loaded into memory before I partitioned the table myself (from HANA Studio).
I was actually asking about unloads _during_ the re-partitioning process. Check M_CS_UNLOADS for the time frame in question.
How do the now longer running SQL statements look like?
=> As I mentioned, selecting/deleting took almost twice as long.
That's not what I asked.
Post the SQL statement text that was taking longer.
What are the three columns you picked for partitioning?
=> mandant, objectclas, tabname(QA has 2 clients and each of them have nearly same rows of the table)
Why those? Because these are the primary key?
I wouldn't be surprised if the SQL statements only refer to e.g. MANDT and TABNAME in the WHERE clause.
In that case the partition pruning cannot work and all partitions have to be searched.
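To make the pruning point concrete, here is a sketch based on the column names mentioned in this thread (the literal values in the WHERE clauses are invented for illustration; see the SAP HANA SQL reference for the exact partitioning syntax):

```sql
-- Hash partitioning over all three key columns:
ALTER TABLE CDPOS PARTITION BY HASH (MANDANT, OBJECTCLAS, TABNAME) PARTITIONS 4;

-- Pruning possible: all three partitioning columns are restricted,
-- so the hash can be computed and only one partition is touched.
DELETE FROM CDPOS
 WHERE MANDANT = '100' AND OBJECTCLAS = 'MATERIAL' AND TABNAME = 'MARA';

-- No pruning: OBJECTCLAS is missing, the hash cannot be computed,
-- so all 4 partitions must be searched.
DELETE FROM CDPOS
 WHERE MANDANT = '100' AND TABNAME = 'MARA';
```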
How did you come up with 4 partitions? Why not 13, 72 or 213?
=> I thought each partition's size would be 8GB (32GB/4) if they were divided equally (just a simple thought), and 8GB is almost the same size as the other largest top-20 tables in the HANA DB.
Alright, so basically that was arbitrary.
For the last comment of your reply: most people would partition their existing large tables to get the benefits of partitioning (just like me). I think your comment applies to newly inserted data.
Well, not sure what "most people" would do.
HASH partitioning a large existing table certainly is not an activity that is just triggered off in a production system. Adding partitions to a range-partitioned table, however, happens all the time.
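A minimal sketch of that range-partitioning pattern, assuming a hypothetical table keyed by change date (the table name ZCDPOS_RANGED, the choice of UDATE as range key, and the column types are all illustrative, not from the thread):

```sql
-- Range partitions can be added (and old ones dropped) over time,
-- avoiding a full re-partitioning of the existing data.
CREATE COLUMN TABLE ZCDPOS_RANGED (
    UDATE      DATE,
    OBJECTCLAS NVARCHAR(15),
    TABNAME    NVARCHAR(30)
) PARTITION BY RANGE (UDATE)
  (PARTITION '2011-01-01' <= VALUES < '2012-01-01',
   PARTITION OTHERS);

-- At year end, just add the next range:
ALTER TABLE ZCDPOS_RANGED ADD PARTITION '2012-01-01' <= VALUES < '2013-01-01';
```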
- Lars