HR/HR schema locked
Hi,
I have a problem logging in to the HR/HR schema.
USERNAME ACCOUNT_STATUS LOCK_DATE
SCOTT LOCKED 30-MAR-09
HR LOCKED 30-MAR-09
so I did this:
alter user HR account UNLOCK;
alter user SCOTT account UNLOCK;
then:
USERNAME ACCOUNT_STATUS LOCK_DATE
SCOTT OPEN
HR OPEN
but still, when I tried to use Oracle SQL Developer to log in as HR/HR, I got the message:
ORA-28000: the account is locked
Cause: The user has entered a wrong password consecutively for the maximum number of times specified by the user's profile parameter FAILED_LOGIN_ATTEMPTS, or the DBA has locked the account.
Action: Wait for PASSWORD_LOCK_TIME or contact DBA
With account SCOTT/TIGER there is no problem.
I do have DBA privileges.
The value of FAILED_LOGIN_ATTEMPTS = 4 and PASSWORD_LOCK_TIME is Unlimited
What should I do to unlock HR/HR and be able to use Oracle SQL Developer to log in?
You did the unlock correctly.
Now you need to go into SQL Developer and verify:
1) that you are using the correct password (hint: google 'alter user identified by')
2) that you are connecting to the correct database
3) that you test the connection, but do not retry it so many times while it fails that the account becomes locked again
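If the password itself is in doubt, it can help to reset it in the same session as the unlock. A minimal sketch, run as a DBA (the password shown is just a placeholder; choose your own):

```sql
-- Unlock the account and set a known password.
ALTER USER hr ACCOUNT UNLOCK;
ALTER USER hr IDENTIFIED BY hr;  -- placeholder password

-- Confirm the account status before retrying from SQL Developer.
SELECT username, account_status, lock_date
FROM   dba_users
WHERE  username = 'HR';
```

Remember that every failed retry from SQL Developer counts toward FAILED_LOGIN_ATTEMPTS again, so verify the password once rather than hammering the connection.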
Similar Messages
-
SQLServer 2008 R2: The SCHEMA LOCK permission was denied on the object
Hi all,
I encounter the following error while developing a SSRS project that connects to a SQL Server Database view:
"Msg 229, Level 14, State 71, Procedure sp_getschemalock, Line 1
The SCHEMA LOCK permission was denied on the object 'Table4', database 'DBRemote', schema 'dbo'."
That view uses a linked server to select data from a remote SQL Server Database (SQL Server 2005).
There are no SQL hints specified in my views.
My view T-SQL is:
Select
From linksv.DBRemote.dbo.Table1 T1
Inner Join linksv.DBRemote.dbo.Table2 T2 On
T1.fk1 = T2.pk1
Inner Join view1 v1 On
T2.fk2 = v1.pk2
My t-sql for view1 is:
Select
From linksv.DBRemote.dbo.Table3 T3
Inner Join linksv.DBRemote.dbo.Table4 T4 On
t3.fk1 = T4.pk1
The object specified in error message above refers to Table "linksv.DBRemote.dbo.Table4" (see view above)
SQL Server Permissions are set for all objects involved in the queries above.
The funny thing is that the error occurs when I run my report from the report server web interface
and my report project is loaded in BIDS at the same time.
The error occurs when I execute the query in SSMS 2008 and also when I run the query
in BIDS 2008 Query designer.
I am also wondering why the error refers to the "linksv.DBRemote.dbo.Table4" remote object only
and not to the other remote objects in that query.
I'm not sure where to look any further for what might cause this error.
Appreciate any help very much.
Thanks,
Bodo

Yes, this error happens because the login that is mapped on the other side of the linked server is missing read permission on the object. All queries made through a linked server acquire a schema lock so that SQL Server can read the results correctly.
I don't know exactly WHY it does it this way, but it does.
To fix the error message, grant the required permission to the login on the server that is the target of the linked-server configuration - be it Windows Authentication, a SQL login, or connections "made using the login's current security context". The preferable
way is to map every login 1-to-1 in the Security tab of the Linked Server Properties page.
I made a post about this:
http://thelonelydba.wordpress.com/2013/04/17/sql-and-linked-servers-the-schema-lock-permission-was-denied-on-the-object/ -
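As a sketch of that fix (server, login, and table names are placeholders; adapt them to your own linked-server configuration):

```sql
-- On the remote server (the target of the linked server):
-- grant read permission to the login the linked server maps to.
GRANT SELECT ON dbo.Table4 TO [remote_login];

-- On the local server: map a local login 1-to-1 to a remote login
-- in the linked server's security configuration.
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'linksv',        -- linked server name
    @useself     = N'False',         -- use the explicit mapping below
    @locallogin  = N'local_login',   -- local login to map
    @rmtuser     = N'remote_login',  -- remote login
    @rmtpassword = N'placeholder';   -- placeholder password
```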
[CDO] Schema Locked Exception after update to CDO 4.4
Hi
after updating to CDO 4.4 I get a Schema locked exception (see below). This happens as soon as I try to commit a new EMF model to the server (even with a "fresh" server and repository).
I'm using the Derby embedded DBStore with a Security Manager, audits enabled, and ensureReferentialIntegrity enabled.
H2 does not work either.
The Exception:
org.eclipse.emf.cdo.util.CommitException: Rollback in DBStore: org.eclipse.net4j.db.DBException: Schema locked: SCHEMA_NAME
at org.eclipse.net4j.internal.db.ddl.DBSchema.assertUnlocked(DBSchema.java:278)
at org.eclipse.net4j.internal.db.ddl.DBTable.assertUnlocked(DBTable.java:340)
at org.eclipse.net4j.internal.db.ddl.DBTable.addField(DBTable.java:104)
at org.eclipse.net4j.internal.db.ddl.DBTable.addField(DBTable.java:89)
at org.eclipse.emf.cdo.server.db.mapping.AbstractTypeMapping.createDBField(AbstractTypeMapping.java:162)
at org.eclipse.emf.cdo.server.internal.db.mapping.horizontal.AbstractHorizontalClassMapping.initFields(AbstractHorizontalClassMapping.java:197)
at org.eclipse.emf.cdo.server.internal.db.mapping.horizontal.AbstractHorizontalClassMapping.<init>(AbstractHorizontalClassMapping.java:107)
at org.eclipse.emf.cdo.server.internal.db.mapping.horizontal.HorizontalBranchingClassMapping.<init>(HorizontalBranchingClassMapping.java:205)
at org.eclipse.emf.cdo.server.internal.db.mapping.horizontal.HorizontalBranchingMappingStrategy.doCreateClassMapping(HorizontalBranchingMappingStrategy.java:62)
at org.eclipse.emf.cdo.server.internal.db.mapping.AbstractMappingStrategy.createClassMapping(AbstractMappingStrategy.java:659)
at org.eclipse.emf.cdo.server.internal.db.mapping.AbstractMappingStrategy.mapClasses(AbstractMappingStrategy.java:651)
at org.eclipse.emf.cdo.server.internal.db.mapping.AbstractMappingStrategy.mapPackageInfos(AbstractMappingStrategy.java:627)
at org.eclipse.emf.cdo.server.internal.db.mapping.AbstractMappingStrategy.mapPackageUnits(AbstractMappingStrategy.java:616)
at org.eclipse.emf.cdo.server.internal.db.mapping.AbstractMappingStrategy.createMapping(AbstractMappingStrategy.java:538)
at org.eclipse.emf.cdo.server.internal.db.mapping.horizontal.HorizontalMappingStrategy.createMapping(HorizontalMappingStrategy.java:144)
at org.eclipse.emf.cdo.server.internal.db.DBStoreAccessor.writePackageUnits(DBStoreAccessor.java:849)
at org.eclipse.emf.cdo.spi.server.StoreAccessor.doWrite(StoreAccessor.java:81)
at org.eclipse.emf.cdo.server.internal.db.DBStoreAccessor.doWrite(DBStoreAccessor.java:828)
at org.eclipse.emf.cdo.spi.server.StoreAccessorBase.write(StoreAccessorBase.java:152)
at org.eclipse.emf.cdo.internal.server.TransactionCommitContext.write(TransactionCommitContext.java:651)
at org.eclipse.emf.cdo.spi.server.InternalCommitContext$1.runLoop(InternalCommitContext.java:48)
at org.eclipse.emf.cdo.spi.server.InternalCommitContext$1.runLoop(InternalCommitContext.java:1)
at org.eclipse.net4j.util.om.monitor.ProgressDistributor.run(ProgressDistributor.java:96)
at org.eclipse.emf.cdo.internal.server.Repository.commitUnsynced(Repository.java:1133)
at org.eclipse.emf.cdo.internal.server.Repository.commit(Repository.java:1126)
at org.eclipse.emf.cdo.server.internal.net4j.protocol.CommitTransactionIndication.indicatingCommit(CommitTransactionIndication.java:320)
at org.eclipse.emf.cdo.server.internal.net4j.protocol.CommitTransactionIndication.indicating(CommitTransactionIndication.java:105)
at org.eclipse.emf.cdo.server.internal.net4j.protocol.CDOServerIndicationWithMonitoring.indicating(CDOServerIndicationWithMonitoring.java:110)
at org.eclipse.net4j.signal.IndicationWithMonitoring.indicating(IndicationWithMonitoring.java:98)
at org.eclipse.net4j.signal.IndicationWithResponse.doExtendedInput(IndicationWithResponse.java:100)
at org.eclipse.net4j.signal.Signal.doInput(Signal.java:330)
at org.eclipse.net4j.signal.IndicationWithResponse.execute(IndicationWithResponse.java:73)
at org.eclipse.net4j.signal.IndicationWithMonitoring.execute(IndicationWithMonitoring.java:67)
at org.eclipse.net4j.signal.Signal.runSync(Signal.java:254)
at org.eclipse.net4j.signal.Signal.run(Signal.java:149)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)

Hi Samuel,
This is the typical exception which occurs when you change your metamodel and the models stored in the CDO repo haven't been migrated.
Best Regards.
Esteban Dugueperoux - Obeo
Need professional services for Sirius?
http://www.obeodesigner.com/sirius -
Hi all,
A schema password has been locked after a user attempted the wrong password 3 or more times. I want to know which user locked the password - which query will give me that?
Thanks.

On a quiet system, that may give you some idea, but on a busy system, with dozens or hundreds of users, how will you discern the difference between successful logins and failures just by looking at sqlnet.log?
As already pointed out, unless the appropriate level of auditing was previously enabled, the information is simply not available anywhere.
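For next time, one way to capture who causes the lock is to enable auditing of failed logins before the fact. A sketch, assuming the instance's AUDIT_TRAIL parameter is set to a database destination (e.g. DB):

```sql
-- Record unsuccessful login attempts from now on.
AUDIT SESSION WHENEVER NOT SUCCESSFUL;

-- Later, list failed logins (return code 1017 = invalid username/password).
SELECT username, userhost, os_username, timestamp, returncode
FROM   dba_audit_trail
WHERE  returncode = 1017
ORDER  BY timestamp DESC;
```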
-Mark -
Good day everyone.
I mistakenly entered the wrong password for my schema while trying to log in. Now I am locked out. Is there a way to unlock the schema? I have the right password now.
Help urgently needed.
Thank you.

Hello,
You can connect as SYS or SYSTEM (in fact, you must connect as a user who has the ALTER USER system privilege) and execute the following statement:
alter user <your_user> account unlock;
Hope this helps.
Best regards,
Jean-Valentin Lubiez
Message was edited by: LubiezJean-Valentin -
Hi everybody!
I am in a little trouble...
*** Scenario.
Oracle Enterprise Edition 11gr2 over Oracle Solaris 11
3 instances: PROD, QA, DEV
1 ASM instance
Listener ports: 1520, 1530 & 1540
Those are production system
*** The problem:
The customer can't pay for Oracle EE; they want to change to Oracle Std One.
Limitations:
I can't have a similar system to test on.
*** My idea:
Install Oracle Std One, in the same path:
/u01/app/oracle/product/11.2.0/db <-- EE
/u01/app/oracle/product/11.2.0/db_one <-- Std Ed One
ASM:
/u01/app/11.2.0/grid <-- EE +ASM
/u01/app/11.2.0/grid_one <-- Std Ed One +ASM1
Create databases:
Assign similar disks to +ASM1
Create DGs on +ASM1
Create databases PROD, QA, DEV for Std Ed One
- Is it possible for them to have the same instance names?
- If not, I suppose I'd create them as PROD1, QA1, DEV1
Configure Listener port : 1521, 1531 & 1541
Migration technique:
Create Tablespaces, schemas
Lock users in EE
down listener: 1520, 1530 & 1540
Import the final users' data in a maintenance window
configure and up listener: 1521, 1531 & 1541
The customer should test access and apps on the new system.
Please tell me if my idea is realistic; all comments will be very appreciated.

sol.beach
I suppose the end customer uses a port per database in order to separate access to the databases in a logical way;
I mean, PROD, QA & DEV have the same users.
Hemant K Chitale
1. I've seen physical servers with 3 ASM instances (+ASM1, +ASM2, +ASM3) on Solaris & Oracle 10g; I suppose it is possible on 11g.
2. The server has 2 occupied sockets (2 physical CPUs), so I can use Oracle Standard Edition One.
JohnWatson2
Thanks for your comments.
WadhahDaouehi
1. - You cannot run two ASM instances simultaneously on one server, but you can run many Oracle databases simultaneously on the same server using ASM as the storage type.
As I mentioned to Hemant K Chitale, I've seen a system with several instances running.
About
"you can run many Oracle database simultaneously on the same server which they use the ASM as storage type."
I'm not sure if I can use the same ASM, which is part of the current Enterprise Ed installation, with the new Oracle Std Ed One installation.
2. - Why do you want the same instance name?
If it is about the service name, which by default is the same as the instance name, you can just create a different instance name
and create the service name with the name that you wish.
SQL> alter system set service_names='instance_name';
It is a similar name, not the same:
PROD, QA & DEV
PROD1, QA1 & DEV1
I added the "1" at the end to refer to "Oracle Standard Ed One".
But I agree with you, I can customize it through service_names.
Regards,
Abraham Mtz. -
Multiple objects with same name when rebuilding index online
I am looking for advice on how to handle a race condition in SQL Server, related to reading metadata while an online index rebuild is being performed.
The problem is as follows:
At some point we execute the following statement:
SELECT
obj.object_id AS id,
scm.name AS scm,
obj.name AS name,
obj.type AS type,
ds.name AS dataspace_name,
prop.value AS description,
part.data_compression_desc as compression_desc
FROM sys.objects AS obj
INNER JOIN sys.schemas AS scm ON obj.schema_id = scm.schema_id
INNER JOIN sys.indexes AS idx ON obj.object_id = idx.object_id AND idx.type IN (0,1)
INNER JOIN sys.data_spaces AS ds ON idx.data_space_id = ds.data_space_id
INNER JOIN (SELECT object_id, data_compression_desc FROM sys.partitions WHERE index_id IN (0,1) /*Heap, Clustered*/) AS part ON part.object_id = obj.object_id
LEFT OUTER JOIN sys.extended_properties AS prop ON obj.object_id = prop.major_id AND prop.minor_id = 0 AND prop.class = 1 AND prop.name = 'Description'
WHERE obj.type = 'U' OR obj.type = 'S'
The statement returns some metadata for indexes; its purpose is not the subject here.
When it is executed while an online index rebuild is running, a race condition occurs: when the rebuild enters its final phase, the new index, which has the same name, becomes visible and thus results in two rows with the same name (from sys.objects). I am unaware
whether this only occurs for clustered indexes (which is what we have observed).
We became aware of this behaviour when we added the metadata to a .NET Dictionary using the name as key and received a duplicate key exception. We have, however, not been able to reproduce the situation, due to the nature of the race condition, and we found very
little documentation on the subject.
What we would like to do now, is to differentiate between the two. We see two options:
1) We could just use the first of the rows and ignore the second. This solution would require that the metadata for both rows is identical.
2) We could discern the "real" index from the "rebuilding" index. This requires some kind of extension of the WHERE clause.
We have not been able to determine whether the requirements for either option are met, as we haven't found any documentation, nor have we been able to test for differences, since we cannot reproduce the situation.
We would also like some way of reproducing the situation, so ideas as to how to do that is welcome.
Can anyone direct me to relevant documentation, or alternate solutions.
HRP
1. Use the index with the lower fragmentation to identify the newly rebuilt index (as it will almost always have lower fragmentation).
2. To reproduce, block the online index rebuild process by altering the table's definition in a transaction (and don't commit, which will place a schema lock on the table).
Satish Kartan http://www.sqlfood.com/ -
Getting random LCK_M_SCH_M on convert and bulk insert task
I started getting random LCK_M_SCH_M locks with huge wait times, which hung my ETL process.
The ssis package runs like this:
I have 4 containers that run in parallel and do the same thing:
-Convert a tab-delimited file from Unicode to UTF-8
-Truncate the table (within a foreach loop)
-Bulk insert the data
Also, TransactionOption is set to NotSupported.
What could be causing the lock?
All foreach loops are non-overlapping regarding tables/files.
Do they contend somehow?

Elias

The TRUNCATE TABLE command imposes a schema lock, so you will have to avoid running these tasks in parallel.
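One way to confirm this diagnosis (a sketch; run it while the package is executing) is to look for schema-modification locks in the lock DMV:

```sql
-- Show sessions holding or waiting on schema-modification locks,
-- which TRUNCATE TABLE acquires on its target table.
SELECT request_session_id,
       resource_type,
       request_mode,      -- 'Sch-M' = schema modification
       request_status     -- 'WAIT' rows are the blocked tasks
FROM   sys.dm_tran_locks
WHERE  request_mode = 'Sch-M';
```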
Arthur
MyBlog
Twitter -
Fact and dimension table partition
My team is implementing a new data warehouse. I would like to know when we should plan the partitioning of the fact and dimension tables: before the data comes in, or after?
Hi,
It is recommended to partition the fact table (where we will have huge data volumes). Automate the partitioning so that each day it creates a new partition to hold the latest data (split the previous partition into 2). Best practice is to create the partitions on transaction
timestamps, so load the incremental data into an empty table (Table_IN) and then switch that data into the main table (Table). Make sure your tables (Table and Table_IN) are on the same filegroup.
Refer below content for detailed info
Designing and Administrating Partitions in SQL Server 2012
A popular method of better managing large and active tables and indexes is the use of partitioning. Partitioning is a feature for segregating I/O workload within
a SQL Server database so that I/O can be better balanced against available I/O subsystems while providing better user response time, lower I/O latency, and faster backups and recovery. By partitioning tables and indexes across multiple filegroups, data retrieval
and management is much quicker because only subsets of the data are used, meanwhile ensuring that the integrity of the database as a whole remains intact.
Tip
Partitioning is typically used for administrative or certain I/O performance scenarios. However, partitioning can also speed up some queries by enabling
lock escalation to a single partition, rather than to an entire table. You must allow lock escalation to move up to the partition level by setting it with either the Lock Escalation option of Database Options page in SSMS or by using the LOCK_ESCALATION option
of the ALTER TABLE statement.
After a table or index is partitioned, data is stored horizontally across multiple filegroups, so groups of data are mapped to individual partitions. Typical
scenarios for partitioning include large tables that become very difficult to manage, tables that are suffering performance degradation because of excessive I/O or blocking locks, table-centric maintenance processes that exceed the available time for maintenance,
and moving historical data from the active portion of a table to a partition with less activity.
Partitioning tables and indexes warrants a bit of planning before putting them into production. The usual approach to partitioning a table or index follows these
steps:
1. Create the filegroup(s) and file(s) used to hold the partitions defined by the partitioning scheme.
2. Create a partition function to map the rows of the table or index to specific partitions based on the values in a specified column. A very common partitioning function is based on the creation date of the record.
3. Create a partitioning scheme to map the partitions of the partitioned table to the specified filegroup(s) and, thereby, to specific locations on the Windows file system.
4. Create the table or index (or ALTER an existing table or index) by specifying the partition scheme as the storage location for the partitioned object.
Although Transact-SQL commands are available to perform every step described earlier, the Create Partition Wizard makes the entire process quick and easy through
an intuitive point-and-click interface. The next section provides an overview of using the Create Partition Wizard in SQL Server 2012, and an example later in this section shows the Transact-SQL commands.
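The four steps above can be sketched in Transact-SQL as follows (database, file, object names, paths, and boundary values are all placeholders):

```sql
-- 1. Filegroups and files to hold the partitions.
ALTER DATABASE SalesDB ADD FILEGROUP FG2011;
ALTER DATABASE SalesDB ADD FILEGROUP FG2012;
ALTER DATABASE SalesDB ADD FILE
    (NAME = F2011, FILENAME = 'C:\Data\F2011.ndf') TO FILEGROUP FG2011;
ALTER DATABASE SalesDB ADD FILE
    (NAME = F2012, FILENAME = 'C:\Data\F2012.ndf') TO FILEGROUP FG2012;
GO
-- 2. Partition function: map rows by creation date.
CREATE PARTITION FUNCTION pf_ByYear (datetime)
    AS RANGE RIGHT FOR VALUES ('2012-01-01');
GO
-- 3. Partition scheme: map partitions to filegroups.
CREATE PARTITION SCHEME ps_ByYear
    AS PARTITION pf_ByYear TO (FG2011, FG2012);
GO
-- 4. Create the table on the partition scheme.
CREATE TABLE dbo.Orders (
    OrderID     int      NOT NULL,
    CreatedDate datetime NOT NULL
) ON ps_ByYear (CreatedDate);
```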
Leveraging the Create Partition Wizard to Create Table and Index Partitions
The Create Partition Wizard can be used to divide data in large tables across multiple filegroups to increase performance and can be invoked by right-clicking
any table or index, selecting Storage, and then selecting Create Partition. The first step is to identify which columns to partition by reviewing all the columns available in the Available Partitioning Columns section located on the Select a Partitioning Column
dialog box, as displayed in Figure 3.13. This screen also includes additional options such as the following:
Figure 3.13. Selecting a partitioning column.
The next screen is called Select a Partition Function. This page is used for specifying the partition function by which the data will be partitioned. The options include using an existing partition function or creating a new one. The subsequent page is called New Partition Scheme. Here a DBA will map the rows of the tables being partitioned to a desired filegroup. Either an existing partition scheme can be used or a new one created. The final screen is used for doing the actual mapping. On the Map Partitions page, specify the filegroup to be used for each partition and then enter a range for the values of the partitions. The ranges and settings on the grid include the following:
Note
By opening the Set Boundary Values dialog box, a DBA can set boundary values based on dates (for example, partition everything in a column after a specific
date). The data types are based on dates.
Designing table and index partitions is a DBA task that typically requires a joint effort with the database development team. The DBA must have a strong understanding
of the database, tables, and columns to make the correct choices for partitioning. For more information on partitioning, review Books Online.
Enhancements to Partitioning in SQL Server 2012
SQL Server 2012 now supports as many as 15,000 partitions. When using more than 1,000 partitions, Microsoft recommends that the instance of SQL Server have at least 16 GB of available memory. This recommendation particularly applies to partitioned indexes, especially those that are not aligned with the base table or with the clustered index of the table. Other Data Manipulation Language (DML) and Data Definition Language (DDL) statements may also run short of memory when processing a large number of partitions.
Certain DBCC commands may take longer to execute when processing a large number of partitions. On the other hand, a few DBCC commands can be scoped to the partition
level and, if so, can be used to perform their function on a subset of data in the partitioned table.
Queries may also benefit from a new query engine enhancement called partition elimination. SQL Server uses partition elimination automatically if it is available.
Here's how it works. Assume a table has four partitions, with all the data for customers whose names begin with R, S, or T in the third partition. If a query's WHERE clause
filters on customer name looking for 'System%', the query engine knows that it needs only partition three to answer
the request. Thus, it might greatly reduce I/O for that query. On the other hand, some queries might take longer if there are more than 1,000 partitions and the query is not able to perform partition elimination.
Finally, SQL Server 2012 introduces some changes and improvements to the algorithms used to calculate partitioned index statistics. Primarily, SQL Server 2012
samples rows in a partitioned index when it is created or rebuilt, rather than scanning all available rows. This may sometimes result in somewhat different query behavior compared to the same queries running on earlier versions of SQL Server.
Administrating Data Using Partition Switching
Partitioning is useful to access and manage a subset of data while losing none of the integrity of the entire data set. There is one limitation, though. When
a partition is created on an existing table, new data is added to a specific partition or to the default partition if none is specified. That means the default partition might grow unwieldy if it is left unmanaged. (This concept is similar to how a clustered
index needs to be rebuilt from time to time to reestablish its fill factor setting.)
Switching partitions is a fast operation because no physical movement of data takes place. Instead, only the metadata pointers to the physical data are altered.
You can alter partitions using SQL Server Management Studio or with the ALTER TABLE...SWITCH
Transact-SQL statement. Both options enable you to ensure partitions are
well maintained. For example, you can transfer subsets of data between partitions, move tables between partitions, or combine partitions together. Because the ALTER TABLE...SWITCH statement
does not actually move the data, a few prerequisites must be in place:
• Partitions must use the same column when switching between two partitions.
• The source and target table must exist prior to the switch and must be on the same filegroup, along with their corresponding indexes,
index partitions, and indexed view partitions.
• The target partition must exist prior to the switch, and it must be empty, whether adding a table to an existing partitioned table
or moving a partition from one table to another. The same holds true when moving a partitioned table to a nonpartitioned table structure.
• The source and target tables must have the same columns in identical order with the same names, data types, and data type attributes
(length, precision, scale, and nullability). Computed columns must have identical syntax, as well as primary key constraints. The tables must also have the same settings for ANSI_NULLS and QUOTED_IDENTIFIER properties.
Clustered and nonclustered indexes must be identical. ROWGUID properties
and XML schemas must match. Finally, settings for in-row data storage must also be the same.
• The source and target tables must have matching nullability on the partitioning column. Although both NULL and NOT NULL are supported, NOT NULL is strongly recommended.
Likewise, the ALTER TABLE...SWITCH statement
will not work under certain circumstances:
• Full-text indexes, XML indexes, and old-fashioned SQL Server rules are not allowed (though CHECK constraints
are allowed).
• Tables in a merge replication scheme are not allowed. Tables in a transactional replication scheme are allowed with special caveats.
Triggers are allowed on tables but must not fire during the switch.
• Indexes on the source and target table must reside on the same partition as the tables themselves.
• Indexed views make partition switching difficult and have a lot of extra rules about how and when they can be switched. Refer to
the SQL Server Books Online if you want to perform partition switching on tables containing indexed views.
• Referential integrity can impact the use of partition switching. First, foreign keys on other tables cannot reference the source
table. If the source table holds the primary key, it cannot have a primary or foreign key relationship with the target table. If the target table holds the foreign key, it cannot have a primary or foreign key relationship with the source table.
In summary, simple tables can easily accommodate partition switching. The more complexity a source or target table exhibits, the more likely that careful planning
and extra work will be required to even make partition switching possible, let alone efficient.
Here’s an example where we create a partitioned table using a previously created partition scheme, called Date_Range_PartScheme1.
We then create a new, nonpartitioned table identical to the partitioned table residing on the same filegroup. We finish up switching the data from the partitioned table into the nonpartitioned table:
CREATE TABLE TransactionHistory_Partn1 (Xn_Hst_ID int, Xn_Type char(10))
ON Date_Range_PartScheme1 (Xn_Hst_ID);
GO
CREATE TABLE TransactionHistory_No_Partn (Xn_Hst_ID int, Xn_Type char(10))
ON main_filegroup;
GO
ALTER TABLE TransactionHistory_Partn1 SWITCH PARTITION 1 TO TransactionHistory_No_Partn;
GO
The next section shows how to use a more sophisticated, but very popular, approach to partition switching called a sliding
window partition.
Example and Best Practices for Managing Sliding Window Partitions
Assume that our AdventureWorks business is booming. The sales staff, and by extension the AdventureWorks2012 database, is very busy. We noticed over time that
the TransactionHistory table is very active as sales transactions are first entered, and it remains very active over their first month in the database. But the older the transactions are, the less activity they see. Consequently, we’d like to automatically group
transactions into four partitions per year, each containing roughly one quarter of the year’s data, in a rolling partition scheme. Any transaction older than one year will be purged or archived.
The answer to a scenario like the preceding one is called a sliding window partition because
we are constantly loading new data in and sliding old data over, eventually to be purged or archived. Before you begin, you must choose either a LEFT partition function window or a RIGHT partition function window:
1. How
data is handled varies according to the choice of LEFT or RIGHT partition function window:
• With a LEFT strategy, partition1 holds the oldest data (Q4 data), partition2 holds data that is 6- to 9-months old (Q3), partition3
holds data that is 3- to 6-months old (Q2), and partition4 holds recent data less than 3-months old.
• With a RIGHT strategy, partition4 holds the oldest data (Q4), partition3 holds Q3 data, partition2 holds Q2 data, and partition1
holds recent data.
• Following the best practice, make sure there are empty partitions on both the leading edge (partition0) and trailing edge (partition5)
of the partition range.
• RIGHT range functions usually make more sense to most people because it is natural to start ranges at their lowest
value and work upward from there.
2. Assuming
that a RIGHT partition function window is used, we first use the SPLIT subclause of the ALTER PARTITION FUNCTION statement
to split empty partition5 into two empty partitions, 5 and 6.
3. We
use the SWITCH subclause
of ALTER TABLE to
switch out partition4 to a staging table for archiving or simply to drop and purge the data. Partition4 is now empty.
4. We
can then use MERGE to
combine the empty partitions 4 and 5, so that we’re back to the same number of partitions as when we started. This way, partition3 becomes the new partition4, partition2 becomes the new partition3, and partition1 becomes the new partition2.
5. We
can use SWITCH to
push the new quarter’s data into the spot of partition1.
Tip
Use the $PARTITION system
function to determine where a partition function places values within a range of partitions.
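The five steps above can be sketched in T-SQL. The partition function name (Date_Range_PartFn1), the staging-table names, and the boundary values below are assumptions for illustration only — only Date_Range_PartScheme1 and TransactionHistory_Partn1 come from the earlier example:

```sql
-- Step 2: split the empty trailing partition to make room for a new quarter
ALTER PARTITION SCHEME Date_Range_PartScheme1
    NEXT USED main_filegroup;
ALTER PARTITION FUNCTION Date_Range_PartFn1()
    SPLIT RANGE ('20130101');  -- boundary for the incoming quarter (assumed value)

-- Step 3: switch the oldest populated partition out to an empty staging table
ALTER TABLE TransactionHistory_Partn1
    SWITCH PARTITION 4 TO TransactionHistory_Unload_Staging;

-- Step 4: merge the two now-empty partitions back into one
ALTER PARTITION FUNCTION Date_Range_PartFn1()
    MERGE RANGE ('20120101');  -- boundary of the emptied quarter (assumed value)

-- Step 5: switch the newly loaded quarter's staging table into place
ALTER TABLE TransactionHistory_Load_Staging
    SWITCH TO TransactionHistory_Partn1 PARTITION 1;
```

As the best practices below note, the staging tables must live on the same filegroup as the partition they are switched with, and the SPLIT and MERGE operations must only ever touch empty partitions.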
Some best practices to consider for using a slide window partition include the following:
• Load newest data into a heap, and then add indexes after the load is finished. Delete oldest data or, when working with very large
data sets, drop the partition with the oldest data.
• Keep an empty staging partition at the leftmost and rightmost ends of the partition range to ensure that partition splits (when
loading new data) and merges (after unloading old data) do not cause data movement.
• Do not split or merge a partition already populated with data because this can cause severe locking and explosive log growth.
• Create the load staging table in the same filegroup as the partition you are loading.
• Create the unload staging table in the same filegroup as the partition you are deleting.
• Don’t load a partition until its range boundary is met. For example, don’t create and load a partition meant to hold data that is
one to two months old before the current data has aged one month. Instead, continue to allow the latest partition to accumulate data until the data is ready for a new, full partition.
• Unload one partition at a time.
• The ALTER TABLE...SWITCH statement
issues a schema lock on the entire table. Keep this in mind if regular transactional activity is still going on while a table is being partitioned.
Thanks Shiven:) If Answer is Helpful, Please Vote -
Hi Gurus,
One of user is getting problem in uploading catalogs in SRM system - Following are the error logs -
Uploading catalog GLOBAL_ECOMPANYSTORE
Name of file that was uploaded: D:\Profiles\JCAPOZZI\My Documents\BIN\eCompanyStore Content Catalog_datase V54.csv
2007/10/12 18:19:55 (GMTUK): Starting CSV import for catalog GLOBAL_ECOMPANYSTORE
2007/10/12 18:19:56 (GMTUK): 0001 package(s) were processed OK
2007/10/12 18:19:56 (GMTUK): Package 0001 received and saved
2007/10/12 18:19:56 (GMTUK): schema locked; number of lock attempts remaining: 59
2007/10/12 18:20:56 (GMTUK): schema locked; number of lock attempts remaining: 58
[... the same message repeats once per minute, counting down to 31 ...]
2007/10/12 18:48:56 (GMTUK): schema locked; number of lock attempts remaining: 30
Catalog update was terminated
the job is getting terminated.
Please suggest and will surely give points for your responses.
Regards
Sridher
Raghu,
Carefully review your data to make sure everything lines up and is appropriate. Then generate a new template and make sure it has all components in it, even though you might not use everything. Then use the 'transfer' function to transfer the data from the existing file to the new template.
Try that. Sometimes MDUG will throw strange errors if you reuse files or copy them. Start with a clean template and this will hopefully work. This has worked for me in the past when I'd see strange, never-before-seen errors.
Thanks.
Matt -
Private Login Section and internal work order Information
H All,
I’d like to create a section on my corporate site where our engineers can login and update customer folders with recent work order information.
I use DW about twice a year so it always feels like I’m starting over each time.
Anyhow, if someone can point me in the direction of creating a private login screen and subsequent pages / forms, for creating such a section, I would greatly appreciate it.
Thanks
Not offended....I understand, and most of the time I AM in over my head,
but I always seem to simplify it enough to get it to work.
Looking for something really basic... simple directory scheme locked away
1. I would just like to create a simple login screen
2. Revealing a page with customer names
3. Click on a name and view previous forms or enter work order info. (what the engineer encountered / fixed - that's about it)
I figure I can even do the basic form with something like a viewable/editable pdf file or LiveCycle or equivalent.
I just really need to make sure it is private so the customer can’t just add forward-slash “/their name” to the end of my service page and get in. -
Solution: multiple diags with name efa.dat found
This is a solution to a problem I hit. When I tried to run a diag on the model it would throw the error:
sims: locating diag efa.dat
sims: Looking for diag under $SIMS_LAUNCH_DIR
sims: Caught a SIGDIE. multiple diags with name efa.dat found at /import/dtg-data20/jj155244/OpenSPARCT2/tools/src/sims,1.272 line 4581.
Solution: do not run in the $DV_ROOT directory. Create a subdirectory for the run or run elsewhere.
An efa.dat file is created in the run directory by sims. There is also an efa.dat in $DV_ROOT/verif/diag/assembly/include/efa.dat. sims looks for the efa.dat file starting in the run directory. It finds both files and complains about finding multiple files.
1. Use the index with the lower fragmentation to identify the newly rebuilt index (as it almost always will have lower fragmentation)
2. To reproduce, block the online index rebuild process by trying to alter the table's definition in a transaction (and don't commit, which will place a schema lock on the table)
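A minimal sketch of that blocking setup might look like the following — the table and index names (dbo.T1, IX_T1) are hypothetical placeholders:

```sql
-- Session 1: take a schema modification (Sch-M) lock and hold it
BEGIN TRANSACTION;
ALTER TABLE dbo.T1 ADD Col2 int NULL;
-- do NOT commit yet; the Sch-M lock is held until commit/rollback

-- Session 2: the online rebuild now waits on the schema lock held above
ALTER INDEX IX_T1 ON dbo.T1 REBUILD WITH (ONLINE = ON);
```

Once session 1 commits, the rebuild in session 2 proceeds normally.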
Satish Kartan http://www.sqlfood.com/ -
Hello Friends ,
When I was starting DEV server , the server does not starts
In the process list I found 'disp+work.exe: Dispatcher stopped'
Syslog :
SAP-Basis system : stop sap system,Dispatcher pid 8
SAP-Basis system : Message server disconnected
communications data : SAP gateway was closed
SAP-Basis system: stop workproc 4, PID 1544
SAP-Basis system : Initialization DB-connect Failed , Return code 000256
Database : ORA-28000: the account is locked
Database : Database error 28000 at CON
Database : ORA - 28000: the account is locked
Database : Database error 28000 at CON
Could someone help me to fix this problem...
Hi,
The reason for the dispatcher stopping might be that the SAP schema account is locked in the database.
SAP-Basis system : Initialization DB-connect Failed , Return code 000256
Database : ORA-28000: the account is locked
Database : Database error 28000 at CON
Try to execute the following command at the SQL prompt and then restart SAP.
Alter user <SAPSCHEMA NAME> account unlock;
Then use BRTools to reset the password of the SAP schema.
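To verify the fix, a quick check before and after might look like this — the schema name SAPSR3 below is only an assumed example; substitute your own SAP schema owner:

```sql
-- Check whether the SAP schema account is locked (SAPSR3 is an assumed name)
SELECT username, account_status, lock_date
FROM dba_users
WHERE username = 'SAPSR3';

-- Unlock it, then restart SAP
ALTER USER SAPSR3 ACCOUNT UNLOCK;
```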
Hope this helps.
Regards,
Varadharajan M -
Hi all,
I tried a lot of techniques to mount MTP devices (such as the latest Android devices without a SD card) automatically when plugged-in. gvfs-gphoto2 kinda worked, but for me it was dreadfully slow, and the only reliable solution, at this time, was to use go-mtpfs. But then it's not automatic, you have to type horribly complicated commands in a terminal! Yuck.
In the PKGBUILD below, a few UDEV rules to fix that. They currently only work with the devices I got (Nexus 7 and Galaxy Nexus, both on CyanogenMod 10), so they definitely won't work on anything else, which is why I didn't submit them right now to the AUR. I just wanted to share them in case anybody was interested. And since it's my first PKGBUILD (hurray!), do share any tips you might have regarding, oh, I don't know, something I might have forgotten, a rule I didn't know about, or something that could be done better.
Cheers!
Package source: https://github.com/fxthomas/android-aut … ter.tar.gz
Thanks for your package. I was trying to use go-mtpfs and a single rules file (adapted from http://bernaerts.dyndns.org/linux/247-u … exus7-mtp) in /etc/udev/rules.d without success.
I've made a few modifications based on http://hackaday.com/2009/09/18/how-to-write-udev-rules/ in your /usr/bin/mtp file to have mount and unmount notifications under Gnome Shell.
There's a problem: unmount notifications don't work. If you have an idea...
here is the modified file:
#!/bin/bash
# Base Script File (android-mtp.sh)
# Created: Tue 04 Dec 2012 06:44:50 PM CET
# Version: 1.0
# Author: François-Xavier Thomas <[email protected]>
# This Bash script was developed by François-Xavier Thomas.
# You are free to copy, adapt or modify it.
# If you do so, however, leave my name somewhere in the credits, I'd appreciate it ;)
GO_MTPFS=/usr/bin/go-mtpfs
DEVICE_NAME=${2//_/ }
GSuser=$(ps -ef | grep -w /usr/bin/gnome-shell | grep -v grep | awk '{print $1}')
GSpid=$(ps -ef | grep -w /usr/bin/gnome-shell | grep -v grep | awk '{print $2}')
DBUS_SESSION_BUS_ADDRESS=`grep -z DBUS_SESSION_BUS_ADDRESS /proc/$GSpid/environ | sed -e 's/DBUS_SESSION_BUS_ADDRESS=//'`
case $1 in
start|mount)
echo "Mounting MTP device on /media/$DEVICE_NAME"
/bin/mkdir -p "/media/$DEVICE_NAME"
/usr/sbin/daemonize -l /var/lock/go-mtpfs.$2.lock /usr/bin/go-mtpfs -allow-other=true "/media/$DEVICE_NAME"
if [ -r "/home/$GSuser/Images/$DEVICE_NAME.png" ]
then
sudo -b -u $GSuser DBUS_SESSION_BUS_ADDRESS=$DBUS_SESSION_BUS_ADDRESS notify-send --hint=int:transient:1 -t 6000 "$DEVICE_NAME mounted at /media/$DEVICE_NAME" -i "/home/$GSuser/Images/$DEVICE_NAME.png"
else
sudo -b -u $GSuser DBUS_SESSION_BUS_ADDRESS=$DBUS_SESSION_BUS_ADDRESS notify-send --hint=int:transient:1 -t 6000 "$DEVICE_NAME mounted at /media/$DEVICE_NAME"
fi
;;
stop|unmount)
echo "Unmounting MTP device on /media/$DEVICE_NAME"
/bin/umount "/media/$DEVICE_NAME"
/bin/rmdir "/media/$DEVICE_NAME"
if [ -r "/home/$GSuser/Images/$DEVICE_NAME.png" ]
then
sudo -b -u $GSuser DBUS_SESSION_BUS_ADDRESS=$DBUS_SESSION_BUS_ADDRESS notify-send --hint=int:transient:1 -t 6000 "$DEVICE_NAME unmounted" -i "/home/$GSuser/Images/$DEVICE_NAME.png"
else
sudo -b -u $GSuser DBUS_SESSION_BUS_ADDRESS=$DBUS_SESSION_BUS_ADDRESS notify-send --hint=int:transient:1 -t 6000 "$DEVICE_NAME unmounted"
fi
;;
*)
echo "Usage: android-mtp start|stop device-name"
;;
esac
Another problem: if the device is locked (I'm using the pattern lock) it will not mount (it tries to mount, fails, then unmounts). Is there a way to send a notification telling the user to disconnect, unlock, and retry? I'm unable to find a way to do this.
Last edited by farnsworth (2013-01-13 12:38:12) -
Deadlocks and with (nolock) hint
Hi, we run SQL Server 2008 R2 Standard. Can a query with the hint WITH (NOLOCK) on all tables ever be involved in a deadlock?
The documentation says schema stability locks (Sch-S) don't block other queries. So unless SQL Server overrides the hint, I'm not understanding. If it can only happen when SQL Server overrides the hint, I'd like to understand when it might make that decision.
Just for your reference.
http://www.sqlservercentral.com/Forums/Topic842499-146-1.aspx
Please note that in some cases the optimizer may behave differently even when query hints are used. You cannot always assume the optimizer will make its decision based only on a query hint. Although it's highly unlikely for NOLOCK to cause a deadlock, it can.
I would not argue about definition of deadlock it does not matter here.
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP
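One way NOLOCK can still participate in a deadlock is through the Sch-S lock it acquires, which conflicts with the Sch-M lock that DDL needs. A minimal sketch — the table name dbo.Orders and column names here are hypothetical:

```sql
-- Session 1: read with NOLOCK; no shared row/page locks are taken,
-- but a schema stability (Sch-S) lock IS taken on dbo.Orders
SELECT OrderID, Status
FROM dbo.Orders WITH (NOLOCK);

-- Session 2: concurrent DDL needs a schema modification (Sch-M) lock,
-- which is incompatible with Sch-S; combined with other lock requests
-- in each session, this can escalate from blocking into a deadlock
ALTER TABLE dbo.Orders ADD Notes varchar(100) NULL;
```

So NOLOCK removes data locks, not schema locks, which is why the hint alone cannot guarantee a query never deadlocks.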