VPD vs Multi Schema vs Partitioning
Hi Oracle Gurus,
We will develop a web-based ERP application for a company with multiple branches (about 20), with the following requirements:
- a normal user can only see and update data for his own branch
- some users from headquarters can see ALL data from ALL branches
- reporting will be per branch, but there should also be consolidated reports
Total users will be 200, but the maximum concurrent load is 60 users.
We will use JSF(ADF Faces) + BC4J, one database server and one apps server.
The question is:
To meet the requirement above about which user can see which data, which Oracle feature should I use?
is it VPD ?
or should I just use different schema for each branch ?
or use Partitioning ?
Thank you for your recommendation,
xtanto
Xtanto
1) Using partitioning won't in itself help with the security question.
- It may help with performance (or it may well not).
2) A different schema for each branch
- is simple and scalable for branch applications,
- but is a total pain in the neck for consolidation (you will create lots of reporting UNION ALL views)
- and it makes it harder to deal with session pooling in the Java application tier if that ever becomes necessary (because you need a separate session pool for each branch, and one for the head office).
- and it works the database harder (every distinct SQL statement in the SGA is multiplied by the number of branches)
- and it makes datamodel maintenance 20 times more tedious
- and you have to decide which data is 'shared' (eg employees? cost codes? customers?) and which are owned by the branch. Some data may be visible to everyone, but 'owned' by a particular branch (or equally likely, by a particular function like HR, Buying, whatever)
3) I have no personal experience of VPD itself. But I'd go for VPD or failing that for old-fashioned 'roll your own' application security (which is often implemented more or less the same as VPD, but at a higher cost to the developer). This gives you the most flexibility; it makes it possible for different users to have overlapping views of the data (eg if you add a regional structure between branches and head office, they may need to see several but not all branches).
Because you have a small app (only 60 users, only 20 branches), some of the downsides of using separate schemas are not such a big thing - but even at 20 branches, the union views will get very unwieldy... And for sure, all organisations change shape over time - don't assume that today's structure will still be in place in 12 months' time!
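For anyone wanting to see what VPD looks like in practice, here is a minimal sketch. All names (context, function, table, column) are invented for illustration; the real policy function would read the branch from however your application initialises its session context.

```sql
-- Hypothetical sketch of a VPD setup; all names are invented.
-- The policy function returns a WHERE-clause fragment per session.
CREATE OR REPLACE FUNCTION branch_policy (
  p_schema IN VARCHAR2,
  p_object IN VARCHAR2
) RETURN VARCHAR2 IS
BEGIN
  IF SYS_CONTEXT('erp_ctx', 'is_hq') = 'Y' THEN
    RETURN NULL;  -- no predicate: HQ users see all branches
  END IF;
  RETURN 'branch_id = SYS_CONTEXT(''erp_ctx'', ''branch_id'')';
END;
/

-- Attach the policy to a branch-owned table.
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'ERP',
    object_name     => 'ORDERS',
    policy_name     => 'orders_branch_policy',
    function_schema => 'ERP',
    policy_function => 'branch_policy',
    statement_types => 'SELECT,INSERT,UPDATE,DELETE');
END;
/
```

The application would populate erp_ctx (via a logon trigger or the connection-pool initialisation code) with the user's branch; head-office users simply get is_hq = 'Y' and therefore an empty predicate, i.e. no restriction.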
My 2d, HTH
Regards Nigel
Similar Messages
-
Adding a child domain: process hangs at "Replicating the schema directory partition"
Hello everyone,
For practice purposes and exam preparation I have my own virtual private network set up on a PowerEdge R905 machine (which is a beast). I have two networks, and a Windows Server 2008 R2 box in a DMZ zone set up as a router to route traffic between my two networks.
My two networks are 192.168.10.0 and 192.168.20.0. The 10 network has its own Active Directory setup; on my 20 network I am trying to deploy a child domain. During the process everything goes just fine, BUT the domain promotion gets stuck on "Replicating the Schema Directory Partition". Can anyone tell me what the issue might be? I have tried everything I could think of, such as:
- made sure the 20 network server points to the DNS server on the 10 network
- verified you can ping the IP address and the FQDN of the 10 network from the 20 network
- made sure all firewalls are disabled on both networks
- on my 10 network I have created sites and assigned the right subnets to each site
So please, any hint and explanation is greatly appreciated.

If the firewalls are disabled between the 2 subnets, then make sure that all of the ports below are open:
Client port(s)            Server port        Service
49152-65535/UDP           123/UDP            W32Time
49152-65535/TCP           135/TCP            RPC Endpoint Mapper
49152-65535/TCP           464/TCP/UDP        Kerberos password change
49152-65535/TCP           49152-65535/TCP    RPC for LSA, SAM, Netlogon (*)
49152-65535/TCP/UDP       389/TCP/UDP        LDAP
49152-65535/TCP           636/TCP            LDAP SSL
49152-65535/TCP           3268/TCP           LDAP GC
49152-65535/TCP           3269/TCP           LDAP GC SSL
53, 49152-65535/TCP/UDP   53/TCP/UDP         DNS
49152-65535/TCP           49152-65535/TCP    FRS RPC (*)
49152-65535/TCP/UDP       88/TCP/UDP         Kerberos
49152-65535/TCP/UDP       445/TCP            SMB
49152-65535/TCP           49152-65535/TCP    DFSR RPC (*)
Then make sure that the other subnet is reached across a route, not across NAT, to avoid a lot of additional configuration.
Regards,
Housam Smadi -
Multi Schema to single schema.
Hi,
I am new to streams.
Is it possible to push the changes from multiple schemas to a single schema? The structure of the consolidated schema will be the same as the source schemas, with just an additional column in the target tables: I would like to populate a unique id for each source schema.
Any example is much appreciated.
Thanks.

It is possible, but you have to change the schema name in the changes by using a DML handler or a transformation rule. For a DML handler I have an example:

CREATE OR REPLACE PROCEDURE emp_dml_handler(in_any IN SYS.AnyData) IS
  lcr        SYS.LCR$_ROW_RECORD;
  rc         PLS_INTEGER;
  command    VARCHAR2(10);
  old_values SYS.LCR$_ROW_LIST;
BEGIN
  -- Access the LCR
  rc := in_any.GETOBJECT(lcr);
  -- Get the command type
  command := lcr.GET_COMMAND_TYPE();
  -- Check for a DELETE command on the employees table
  IF command = 'DELETE' THEN
    -- Set the command_type in the row LCR to INSERT
    lcr.SET_COMMAND_TYPE('INSERT');
    -- Redirect the row LCR to the destination table
    lcr.SET_OBJECT_NAME('DESTINATION.EMPLOYEES');
    -- Get the old values in the row LCR
    old_values := lcr.GET_VALUES('old');
    -- Copy the old values into the new values
    lcr.SET_VALUES('new', old_values);
    -- Clear the old values
    lcr.SET_VALUES('old', NULL);
    -- Optionally add a SYSDATE value for a timestamp column
    -- lcr.ADD_COLUMN('new', 'TIMESTAMP', SYS.AnyData.ConvertDate(SYSDATE));
    -- Apply the row LCR as an INSERT into the destination table
    lcr.EXECUTE(true);
  END IF;
END;
/

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name         => 'source.employees',
    object_type         => 'TABLE',
    operation_name      => 'DELETE',  -- must match the command the handler acts on
    error_handler       => false,
    user_procedure      => 'strm_emp_admin.emp_dml_handler',
    apply_database_link => NULL,
    apply_name          => NULL);
END;
/
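For completeness: on 10g and later, the schema rename alone can also be done declaratively, without a handler, via DBMS_STREAMS_ADM.RENAME_SCHEMA. A sketch - the rule name and schema names below are placeholders you would look up in DBA_STREAMS_RULES:

```sql
-- Hypothetical sketch: declaratively rename the schema on the apply-side rule.
BEGIN
  DBMS_STREAMS_ADM.RENAME_SCHEMA(
    rule_name        => 'strmadmin.employees5',  -- placeholder rule name
    from_schema_name => 'SOURCE',
    to_schema_name   => 'DESTINATION');
END;
/
```

The extra unique-id column per source schema would still need either a declarative ADD_COLUMN transformation or a DML handler that calls lcr.ADD_COLUMN.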
Regards,
Martien -
How to simply remove Partition Schemes or Partition Functions
Hello,
Can anyone explain how to remove partition schemes and the related partition functions?
When I try to remove a partition scheme, I get error 7717 and can see there are many dependent tables. However, I have no idea how to remove them.
Thanks in advance.

You should find all the related objects first in order to remove the partition scheme:
https://social.technet.microsoft.com/Forums/sharepoint/en-US/41418b34-d000-49de-8074-9b662b0a3013/deleting-an-incomplete-partition?forum=sqldatabaseengine
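Error 7717 means the scheme is still in use. Something along these lines (T-SQL, a sketch) lists the tables and indexes still placed on each partition scheme - these are the dependencies you must move or drop first:

```sql
-- List tables/indexes still bound to a partition scheme (SQL Server).
SELECT ps.name AS partition_scheme,
       pf.name AS partition_function,
       OBJECT_NAME(i.object_id) AS table_name,
       i.name  AS index_name
FROM sys.partition_schemes ps
JOIN sys.partition_functions pf ON pf.function_id = ps.function_id
JOIN sys.indexes i ON i.data_space_id = ps.data_space_id
ORDER BY ps.name, table_name;
```

Once every listed index has been dropped or rebuilt onto an ordinary filegroup, DROP PARTITION SCHEME and then DROP PARTITION FUNCTION should succeed.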
Ronen Ariely
[Personal Site] [Blog] [Facebook] -
Multi schema versus single schema set up
Hi,
I have a question regarding the environment set up of an Oracle datawarehouse regarding multischema versus single schema set up. My database has several schemas on it representing the functional areas of the warehouse. I have installed the Runtime Repository on the Database and registered each target schema with it. Now that I have started developing, I have noticed the way in which OWB builds database links for mappings which have source and target objects residing in different schemas. I therefore have a major concern regarding the performance of such a set up as opposed to one with a single schema. If anyone has had any experience of working with both environments please could they advise me.
Your comments will be most appropriate.
Take care
Mitesh

The requirement for a single or multi-schema set-up is driven by the business requirement; it is a policy decision more than a technical one.
Depending upon requirements you can have not only multiple schemas but also multiple instances, databases, or servers for OLTP, staging, and star.
Normally, for good performance, a separate schema is created for staging and for the star (that is, one per layer), as this offers better performance than a single-schema set-up. -
Hello,
Please forgive me if this is an elementary question... I'm trying to run a query against multiple schemas, but it does not work. I've associated both schemas with my workspace, and the query runs fine in SQL Workshop. But when I create a report page and specify the query there, it tells me the table/view does not exist. I also tried building it using the query builder utility; the second schema name does not appear in the drop-down list at the top right of the page.
Can anyone help out with this?
Thanks!!!

Hello:
Generally, if a query runs from SQL Workshop you should be able to use the query in an APEX report.
Check if making an explicit grant on the table in the other schema to the default schema of your application makes a difference.
Example: If SchemaA and SchemaB are the two schemas allowed for your workspace, and TableA exists in SchemaA and TableB exists in SchemaB:
GRANT SELECT ON SchemaA.TableA TO SchemaB;
GRANT SELECT ON SchemaB.TableB TO SchemaA;
Varad
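If the grant alone does not make the name resolve, a synonym in the application's parsing schema is also worth trying. A sketch using the same example names (creating a synonym in another schema requires the CREATE ANY SYNONYM privilege):

```sql
-- Let SchemaB (the application's parsing schema) query TableA without a prefix.
GRANT SELECT ON SchemaA.TableA TO SchemaB;
CREATE SYNONYM SchemaB.TableA FOR SchemaA.TableA;
```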
Question about using Sun Multi-Schema XML Validator
Hi all,
I tried to use Sun MSV to validate XML against my schema file, and the result is strange. To test the validator simply, I use the following XML and XSD files:
------- test.xml -------------------------------------------------
<?xml version="1.0" encoding="ISO-8859-1"?>
<persons xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="test.xsd">
<person>1</person>
<person>2</person>
<person>3</person>
</persons>
--------test.xsd ---------------------------------------------------
<?xml version="1.0" encoding="ISO-8859-1"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
elementFormDefault="qualified">
<xs:element name="persons">
<xs:complexType>
<xs:sequence>
<xs:element name="person" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
To my knowledge of XML Schema, the above XML file should validate OK against the XSD file.
But in sun msv validator, it reports the following error:
------------ error message ---------------------------------------------------
Error of test.xml
element "person" was found where no element may occur
Why does this happen? I have defined the occurrence of the "person" element under "persons" as unbounded. What could be wrong?
Thanks in advance!

Problem solved by updating the MSV lib files. Thanks for noticing!
-
Hanging on Multi-threaded DB Partition to Oracle -cnt'd
Hi All,
Thanks for everyones replies to my plaintive calls for help.
Our problem was a hang of the "OpenCursor" call during dynamic
sql execution from Forte 3.0.M.0 to Oracle on NT.
The most useful suggestions included:
* Check for Errors in the SQL
- no errors found. SQL works fine from other clients
* Transaction around the entire piece of code
- already there.
* Ensure that the following line is used:
self.Session.RemoveStatement(statementHandle=dynStatement);
- it was already there.
* Try turning on log flags
Useful Log flags that might be helpful are
trc:db:1 - Cursor operations
trc:db:1:100 - Vendor specific cursor operation tracing
trc:db:1:200 - Checking cursor queues, fetching rows, cursor count
- we still get stuck in the OpenCursor, with no useful
information shown to see where the thread is stuck.
* Increase Cursor Count in Oracle - this was shown
not to be a problem. We did a separate test opening cursors
without closing them, and indeed hit the Oracle cursor limit,
but with this code we did not get back such an exception.
* We also set up tracing within Oracle, and the application proved to
be hanging inside Forte - before it had apparently reached Oracle.
* And a dumpstatus on the partition showed us stuck in the OpenCursor.
* It seems that indeed this problem is one with Forte and its
multithreaded DB libraries, so our only solution was to revert back
to the traditional partition-based replication of single threaded
DB partitions. It only took a minute to revert back, but this was
disappointing from a hardware resources point of view. We have
not seen the problem again since making this change.
* One other question that came up - indeed 3.0.M.0 is a beta release,
and we are looking to move to 3.0.M.2 shortly. We had to move to M
quickly to resolve an ISAPI problem in L, but plan to move to M.2
as soon as possible.
Thanks again for all the replies!
Ben Macdonald
Xpedior, Inc

@Sybrand - Funny you should mention Larry's boat - it happens to be moored about 200 metres from where I'm sitting here in Cape Town :-)
And yes, I will get in touch with sales for more clarity...
Thanks,
Andreas
*** Update
I called our 'Senior License Management Service Consultant' at Oracle and she confirms that Oracle do NOT take processor multithreading into consideration for licensing purposes.
So, our single quad core, multithreaded processor will be licensed as 4 cores, so will require 4 x 0.5 = 2 CPU licenses.
I guess Larry will have to drink cheaper champagne ;-)
Edited by: Andreas Hess on May 20, 2010 2:41 PM -
Multi schema same table structure
I have about 90 schemas with the same tables structures.
I need to create one folder for one table in Disco. Admin that loops over these schemas. I don't want to use a view over the schemas.
Note that new schemas are created from time to time.
How can I do that?
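One way to avoid maintaining 90 hand-written UNION ALL branches is to generate the view. A hedged sketch, assuming the schemas share a naming pattern (BRANCH%) and a common table EMP - both invented names; rerun it whenever a new schema appears:

```sql
-- Generate one UNION ALL view over the same table in every branch schema.
DECLARE
  l_sql VARCHAR2(32767);
BEGIN
  FOR s IN (SELECT username
              FROM all_users
             WHERE username LIKE 'BRANCH%'
             ORDER BY username) LOOP
    IF l_sql IS NOT NULL THEN
      l_sql := l_sql || ' UNION ALL ';
    END IF;
    -- Tag each row with its source schema so reports can tell branches apart.
    l_sql := l_sql
          || 'SELECT ''' || s.username || ''' AS branch_schema, t.* FROM '
          || s.username || '.EMP t';
  END LOOP;
  EXECUTE IMMEDIATE 'CREATE OR REPLACE VIEW all_branches_emp AS ' || l_sql;
END;
/
```

The generated view can then back a single folder in Discoverer Administrator; running this block from a DBMS_SCHEDULER job would pick up newly created schemas automatically.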
Okay, so I'm new to Arch Linux and I'd like you guys to help me pick a partition scheme.
I'll be using a 250gb hard drive and the system has 2GB of memory.
I don't think I need any special security measures, so I've always used a pretty simple partition scheme (one partition for swap and another for /), but if there are benefits in using a more complex partition scheme I'm open to trying it. I'd also like to know what file system to use on each partition.
Sorry if my English isn't perfect - it is not my main language. Also sorry to bother you with such a newbie question, but this is the newbie forum after all.

beat wrote:
/ 20GB ext3
/swap 2GB
/var 10GB ReiserFS
/home ~rest ext3
Does this look good?
I see a lot of people also have a different partition for /boot (usually formatted as ext2), what are the advantages in doing so? Since I'm not going to multi-boot should I do it? How big should this partition be?
Also I'm a bit afraid of trying ext4, are there noticeable performance gains over ext3? I can't risk losing some data so I must be sure it is safe.
Looks fine but I'd use ext4 if I were you. Answers to your questions: a different /boot is good if you have multiple Linux partitions and you want to manually manage your grub menu.lst; you can add entries to chainload the native grub screens of each of the respective Linux root partitions. Honestly, if you only have one Linux root partition, there isn't a really big reason to have your own /boot in my opinion. My system has a Windows partition and three Linux partitions; my /boot is 20 megs (7 is used). There are noticeable performance gains over using ext3, yes. Google around and search these forums for details. Ext4 is safe so long as the software you're using on it is well written. You can feel pretty comfortable using it; it will become the defacto workstation filesystem in the near future (my opinion). The 2.6.30 kernel is rumored to have a number of tweaks to make ext4 'safer' for some poorly coded software.
Sorry the above are so general and not referenced, I'm in a rush right now. Maybe others can elaborate to totally berate me
Last edited by graysky (2009-05-25 11:37:50) -
Unable to change to GUID partition scheme so Snow Leopard can be installed
Upgrading from OS 10.5.8 to Snow Leopard 10.6.3 on Macbook 4.1 with Intel Core 2 Duo, 2GB RAM. Instructions state "Macintosh HD" can't be used because it does not use the GUID partition Table Scheme. Use Disk Utility to change the partition scheme. Select the disk, use the Partition Tab, select the Volume Scheme, and then select Options.
Problem: The Options button is grayed out. The Partition Map Scheme is Apple Partition Map.
Note: Macbook purchased in 2008, but the hard drive was replaced in April 2011. Hard drive Information:
Name :
TOSHIBA MK1655GSXF Media
Type :
Disk
Partition Map Scheme :
Apple Partition Map
Disk Identifier :
disk0
Media Name :
TOSHIBA MK1655GSXF Media
Media Type :
Generic
Connection Bus :
Serial ATA 2
How do I change the partition scheme to GUID?

This requires that you repartition the entire drive as follows:
Drive Preparation
1. Boot from your OS X Installer Disc. After the installer loads select your language and click on the Continue button. When the menu bar appears select Disk Utility from the Utilities menu.
2. After DU loads select your hard drive (this is the entry with the mfgr.'s ID and size) from the left side list. Note the SMART status of the drive in DU's status area. If it does not say "Verified" then the drive is failing or has failed and will need replacing. SMART info will not be reported on external drives. Otherwise, click on the Partition tab in the DU main window.
3. Under the Volume Scheme heading set the number of partitions from the drop down menu to one. Click on the Options button, set the partition scheme to GUID then click on the OK button. Set the format type to Mac OS Extended (Journaled.) Click on the Partition button and wait until the process has completed.
4. At this point you can quit DU and return to the installer and install Snow Leopard.
If you have data you wish to save then you need to backup before doing the above. -
Losing disk partition after choosing startup disk
Hi,
I'm using a MacBook Pro Retina 2012, and I have 3 disk partitions: one for Mac OS X, one for Windows (Boot Camp), and one for data (ExFAT). It worked normally until I changed the startup disk to Windows. After that, the data drive no longer shows up. I have more than 100GB of important data on this drive. What happened? How do I restore it? I can't repair the disk; that doesn't work for me. This is how it shows in Disk Utility.
I've tried to mount it, repair it, and verify it, but without success. This is the log:
2013-07-18 15:39:32 +0700: Disk Utility started.
2013-07-18 15:44:53 +0700: Verifying partition map for “APPLE SSD SM512E Media”
2013-07-18 15:44:53 +0700: Starting verification tool:
2013-07-18 15:44:53 +0700: Checking prerequisites
2013-07-18 15:44:53 +0700: Checking the partition list
2013-07-18 15:44:53 +0700: Checking for an EFI system partition
2013-07-18 15:44:53 +0700: Checking the EFI system partition’s size
2013-07-18 15:44:53 +0700: Checking the EFI system partition’s file system
2013-07-18 15:44:53 +0700: Checking all HFS data partition loader spaces
2013-07-18 15:44:53 +0700: Checking Core Storage Physical Volume partitions
2013-07-18 15:44:53 +0700: The partition map appears to be OK
2013-07-18 15:44:53 +0700:
2013-07-18 15:44:53 +0700:
2013-07-18 15:44:55 +0700: Verifying and repairing partition map for “APPLE SSD SM512E Media”
2013-07-18 15:44:55 +0700: Starting repair tool:
2013-07-18 15:44:55 +0700: Checking prerequisites
2013-07-18 15:44:55 +0700: Checking the partition list
2013-07-18 15:44:55 +0700: Adjusting partition map to fit whole disk as required
2013-07-18 15:44:55 +0700: Checking for an EFI system partition
2013-07-18 15:44:55 +0700: Checking the EFI system partition’s size
2013-07-18 15:44:55 +0700: Checking the EFI system partition’s file system
2013-07-18 15:44:55 +0700: Checking all HFS data partition loader spaces
2013-07-18 15:44:55 +0700: Reviewing boot support loaders
2013-07-18 15:44:55 +0700: Checking Core Storage Physical Volume partitions
2013-07-18 15:44:55 +0700: Updating Windows boot.ini files as required
2013-07-18 15:44:55 +0700: The partition map appears to be OK
2013-07-18 15:44:55 +0700:
2013-07-18 15:44:55 +0700:
2013-07-18 15:51:33 +0700: Disk Utility started.
2013-07-18 15:59:19 +0700: Disk Utility started.
2013-07-18 15:59:36 +0700: Preparing to remove partition from disk: “APPLE SSD SM512E Media”
2013-07-18 15:59:36 +0700: Partition Scheme: GUID Partition Table
2013-07-18 15:59:36 +0700: 1 partition will be removed
2013-07-18 15:59:36 +0700: 1 partition will not be changed
2013-07-18 15:59:36 +0700:
2013-07-18 15:59:36 +0700: Partition 1
2013-07-18 15:59:36 +0700: Name : “Mac”
2013-07-18 15:59:36 +0700: Size : 101.93 GB
2013-07-18 15:59:36 +0700: File system : Mac OS Extended (Journaled)
2013-07-18 15:59:36 +0700: Do not erase contents
2013-07-18 15:59:36 +0700:
2013-07-18 15:59:36 +0700: Partition 2
2013-07-18 15:59:36 +0700: Size : 398 GB
2013-07-18 15:59:36 +0700: File system : Free Space
2013-07-18 15:59:36 +0700:
2013-07-18 15:59:36 +0700: Beginning partition operations
2013-07-18 15:59:36 +0700: Unmounting disk
2013-07-18 15:59:36 +0700: Finishing partition modifications
2013-07-18 15:59:36 +0700: Waiting for the disks to reappear
2013-07-18 15:59:36 +0700: Partition complete.
2013-07-18 15:59:36 +0700:
2013-07-18 15:59:42 +0700: Preparing to partition disk: “APPLE SSD SM512E Media”
2013-07-18 15:59:42 +0700: Partition Scheme: GUID Partition Table
2013-07-18 15:59:42 +0700: 1 partition will be created
2013-07-18 15:59:42 +0700:
2013-07-18 15:59:42 +0700: Partition 1
2013-07-18 15:59:42 +0700: Name : “Mac”
2013-07-18 15:59:42 +0700: Size : 499.93 GB
2013-07-18 15:59:42 +0700: File system : Mac OS Extended (Journaled)
2013-07-18 15:59:42 +0700: Do not erase contents
2013-07-18 15:59:42 +0700:
2013-07-18 15:59:42 +0700: Beginning partition operations
2013-07-18 15:59:42 +0700: Verifying the disk
2013-07-18 15:59:42 +0700: Checking file system
2013-07-18 15:59:42 +0700: Performing live verification.
2013-07-18 15:59:42 +0700: Checking Journaled HFS Plus volume.
2013-07-18 15:59:42 +0700: Checking extents overflow file.
2013-07-18 15:59:42 +0700: Checking catalog file.
2013-07-18 15:59:53 +0700: Checking multi-linked files.
2013-07-18 15:59:53 +0700: Checking catalog hierarchy.
2013-07-18 15:59:53 +0700: Checking extended attributes file.
2013-07-18 15:59:53 +0700: Checking volume bitmap.
2013-07-18 15:59:53 +0700: Checking volume information.
2013-07-18 15:59:53 +0700: The volume Mac appears to be OK.
2013-07-18 15:59:53 +0700: Unmounting disk
2013-07-18 15:59:53 +0700: Finishing partition modifications
2013-07-18 15:59:53 +0700: Waiting for the disks to reappear
2013-07-18 15:59:53 +0700: Growing disk
2013-07-18 15:59:54 +0700: Partition complete.
2013-07-18 15:59:54 +0700:
2013-07-18 16:36:53 +0700: Disk Utility started.
2013-07-18 16:37:07 +0700: Preparing to partition disk: “APPLE SSD SM512E Media”
2013-07-18 16:37:07 +0700: Partition Scheme: GUID Partition Table
2013-07-18 16:37:07 +0700: 3 partitions will be created
2013-07-18 16:37:07 +0700:
2013-07-18 16:37:07 +0700: Partition 1
2013-07-18 16:37:07 +0700: Name : “Mac”
2013-07-18 16:37:07 +0700: Size : 100 GB
2013-07-18 16:37:07 +0700: File system : Mac OS Extended (Journaled)
2013-07-18 16:37:07 +0700: Do not erase contents
2013-07-18 16:37:07 +0700:
2013-07-18 16:37:07 +0700: Partition 2
2013-07-18 16:37:07 +0700: Name : “Mac 2”
2013-07-18 16:37:07 +0700: Size : 298.93 GB
2013-07-18 16:37:07 +0700: File system : Mac OS Extended (Journaled)
2013-07-18 16:37:07 +0700:
2013-07-18 16:37:07 +0700: Partition 3
2013-07-18 16:37:07 +0700: Name : “BOOTCAMP”
2013-07-18 16:37:07 +0700: Size : 101 GB
2013-07-18 16:37:07 +0700: File system : Windows NT File System (NTFS)
2013-07-18 16:37:07 +0700: Do not erase contents
2013-07-18 16:37:07 +0700:
2013-07-18 16:37:07 +0700: Beginning partition operations
2013-07-18 16:37:07 +0700: Verifying the disk
2013-07-18 16:37:07 +0700: Checking file system
2013-07-18 16:37:07 +0700: Performing live verification.
2013-07-18 16:37:07 +0700: Checking Journaled HFS Plus volume.
2013-07-18 16:37:07 +0700: Checking extents overflow file.
2013-07-18 16:37:18 +0700: Checking catalog file.
2013-07-18 16:37:19 +0700: Checking multi-linked files.
2013-07-18 16:37:19 +0700: Checking catalog hierarchy.
2013-07-18 16:37:19 +0700: Checking extended attributes file.
2013-07-18 16:37:19 +0700: Checking volume bitmap.
2013-07-18 16:37:19 +0700: Checking volume information.
2013-07-18 16:37:19 +0700: The volume Mac appears to be OK.
2013-07-18 16:37:19 +0700: Shrinking the disk
2013-07-18 16:37:19 +0700: Unmounting disk
2013-07-18 16:37:19 +0700: Finishing partition modifications
2013-07-18 16:37:19 +0700: Waiting for the disks to reappear
2013-07-18 16:37:20 +0700: Formatting disk0s5 as Mac OS Extended (Journaled) with name Mac 2
2013-07-18 16:37:21 +0700: Initialized /dev/rdisk0s5 as a 278 GB HFS Plus volume with a 24576k journal
2013-07-18 16:37:21 +0700: Mounting disk
2013-07-18 16:37:21 +0700: Partition complete.
2013-07-18 16:37:21 +0700:
2013-07-18 16:37:35 +0700: Preparing to erase : “Data”
2013-07-18 16:37:35 +0700: Partition Scheme: GUID Partition Table
2013-07-18 16:37:35 +0700: 1 volume will be erased
2013-07-18 16:37:35 +0700: Name : “Data”
2013-07-18 16:37:35 +0700: Size : 298.8 GB
2013-07-18 16:37:35 +0700: File system : ExFAT
2013-07-18 16:37:35 +0700: Unmounting disk
2013-07-18 16:37:35 +0700: Erasing
2013-07-18 16:37:35 +0700: Volume name : Data
Partition offset : 195984280 sectors (100343951360 bytes)
Volume size : 583593064 sectors (298799648768 bytes)
Bytes per sector : 512
Bytes per cluster: 131072
FAT offset : 2048 sectors (1048576 bytes)
# FAT sectors : 18432
Number of FATs : 1
Cluster offset : 20480 sectors (10485760 bytes)
# Clusters : 2279580
Volume Serial # : 51e7b75f
Bitmap start : 2
Bitmap file size : 284948
Upcase start : 5
Upcase file size : 5836
Root start : 6
2013-07-18 16:37:35 +0700: Mounting disk
2013-07-18 16:37:35 +0700: Erase complete.
2013-07-18 16:37:35 +0700:
2013-11-07 13:52:44 +0700: Disk Utility started.
2013-11-07 14:04:32 +0700:
Name : disk0s3
Type : Partition
Disk Identifier : disk0s3
Mount Point : Not mounted
File System : MS-DOS (FAT)
Connection Bus : SATA
Device Tree : IODeviceTree:/PCI0@0/SATA@1F,2/PRT0@0/PMP@0
Writable : Yes
Capacity : 298.8 GB (298,799,648,768 Bytes)
Owners Enabled : No
Can Turn Owners Off : No
Can Be Formatted : Yes
Bootable : No
Supports Journaling : No
Journaled : No
Disk Number : 0
Partition Number : 3
2013-11-07 14:26:36 +0700: Disk Utility started.
2013-11-07 14:26:47 +0700: Verify and Repair volume “disk0s3”
2013-11-07 14:26:47 +0700: Starting repair tool:
2013-11-07 14:26:47 +0700: Checking file system
2013-11-07 14:26:47 +0700: ** /dev/disk0s3
2013-11-07 14:26:47 +0700: Invalid sector size: 0
2013-11-07 14:26:47 +0700: Volume repair complete.
2013-11-07 14:26:47 +0700: Updating boot support partitions for the volume as required.
2013-11-07 14:26:47 +0700: Error: Disk Utility can’t repair this disk. Back up as many of your files as possible, reformat the disk, and restore your backed-up files.
2013-11-07 14:26:47 +0700:
2013-11-07 14:26:47 +0700: Disk Utility stopped repairing “disk0s3”: Disk Utility can’t repair this disk. Back up as many of your files as possible, reformat the disk, and restore your backed-up files.
2013-11-07 14:26:47 +0700:
2013-11-07 14:27:21 +0700: Verifying volume “disk0s3”
2013-11-07 14:27:21 +0700: Starting verification tool:
2013-11-07 14:27:21 +0700: Checking file system
2013-11-07 14:27:21 +0700: ** /dev/disk0s3
2013-11-07 14:27:21 +0700: Invalid sector size: 0
2013-11-07 14:27:21 +0700: Error: This disk needs to be repaired. Click Repair Disk.
2013-11-07 14:27:21 +0700:
2013-11-07 14:27:21 +0700: Disk Utility stopped verifying “disk0s3”: This disk needs to be repaired. Click Repair Disk.
2013-11-07 14:27:21 +0700:
2013-11-07 14:27:30 +0700: Verify and Repair volume “disk0s3”
2013-11-07 14:27:30 +0700: Starting repair tool:
2013-11-07 14:27:30 +0700: Checking file system
2013-11-07 14:27:30 +0700: ** /dev/disk0s3
2013-11-07 14:27:30 +0700: Invalid sector size: 0
2013-11-07 14:27:30 +0700: Volume repair complete.
2013-11-07 14:27:30 +0700: Updating boot support partitions for the volume as required.
2013-11-07 14:27:30 +0700: Error: Disk Utility can’t repair this disk. Back up as many of your files as possible, reformat the disk, and restore your backed-up files.
2013-11-07 14:27:30 +0700:
2013-11-07 14:27:30 +0700: Disk Utility stopped repairing “disk0s3”: Disk Utility can’t repair this disk. Back up as many of your files as possible, reformat the disk, and restore your backed-up files.
2013-11-07 14:27:30 +0700:
2013-11-07 14:32:17 +0700:
Name : APPLE SSD SM512E Media
Type : Disk
Partition Map Scheme : GUID Partition Table
Disk Identifier : disk0
Media Name : APPLE SSD SM512E Media
Media Type : Generic
Connection Bus : SATA
Device Tree : IODeviceTree:/PCI0@0/SATA@1F,2/PRT0@0/PMP@0
Writable : Yes
Ejectable : No
Location : Internal
Solid State Disk : Yes
Total Capacity : 500.28 GB (500,277,790,720 Bytes)
Disk Number : 0
Partition Number : 0
S.M.A.R.T. Status : Verified
Raw Read Error : 000000000000
Reallocated Sector Count : 000000000000
Power On Hours : 0000000003B2
Power Cycle : 000000000DD0
Temperature : 004F00040027
UDMA CRC Error (PATA only) : 000000000000
2013-11-07 14:39:58 +0700:
Name : disk0s3
Type : Partition
Disk Identifier : disk0s3
Mount Point : Not mounted
File System : MS-DOS (FAT)
Connection Bus : SATA
Device Tree : IODeviceTree:/PCI0@0/SATA@1F,2/PRT0@0/PMP@0
Writable : Yes
Capacity : 298.8 GB (298,799,648,768 Bytes)
Owners Enabled : No
Can Turn Owners Off : No
Can Be Formatted : Yes
Bootable : No
Supports Journaling : No
Journaled : No
Disk Number : 0
Partition Number : 3
2013-11-07 14:43:11 +0700: Disk Utility started.
Thanks for help

You CAN'T have 3 partitions on your Mac and have the Windows install work. Windows only allows 4 primary partitions on any one physical hard drive. Since you created an exFAT partition, you have passed that Windows limit.
One partition for OS X, one for Windows, one for the OS X Recovery HD, and the fourth for the EFI. Now that you have created a so-called Data partition, Windows will no longer boot.
You are basically TOAST and might need to start over with a total wipe and re-partitioning of the drive: reinstall OS X and your programs and files, then reinstall Windows and its programs and files, and then do not try fooling with the partitions again - any of them, on the OS X or Windows side - because if you do, Windows again will not boot.
SharePoint Foundation 2013 - Multi-tenant Install and OneDrive for Business with Yammer integration
Hello,
After installing SharePoint Foundation 2013 (SP1) with partitioned service applications, we have noticed that clicking the "Yammer and OneDrive" link brings up the error message below:
_admin/yammerconfiguration.aspx
any ideas??
http://technet.microsoft.com/en-us/library/dn659286%28v=office.15%29.aspx
We have also noticed that MS mentions: "OneDrive for Business with Yammer integration doesn’t work for multi-tenancy or partitioned service applications for on-premises deployments".
ja

ULS:
Application error when access /_admin/cloudconfiguration.aspx, Error=Object reference not set to an instance of an object. at Microsoft.SharePoint.WebControls.SPPinnedSiteTile.OnInit(EventArgs e) at System.Web.UI.Control.InitRecursive(Control
namingContainer) at System.Web.UI.Control.InitRecursive(Control namingContainer) at System.Web.UI.Control.InitRecursive(Control namingContainer) at System.Web.UI.Control.InitRecursive(Control
namingContainer) at System.Web.UI.Control.InitRecursive(Control namingContainer) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
=====
To me it seems SharePoint social networking features require the full SharePoint Server product AND are not available with the free SharePoint Foundation. If that is correct, then why is MS putting this link here in Foundation without a friendly error message?
ja -
Partition of hard drive won't work...
I currently have a WD MyBook with the following info
Name : WD My Book 1110 Media
Type : Disk
Partition Map Scheme : Apple Partition Map
Disk Identifier : disk1
Media Name : WD My Book 1110 Media
Media Type : Generic
Connection Bus : USB
USB Serial Number : 574341563543373737303130
Device Tree : IODeviceTree:/PCI0@0/EHC1@1D,7
Writable : Yes
Ejectable : Yes
Mac OS 9 Drivers Installed : No
Location : External
Total Capacity : 999.5 GB (999,501,594,624 Bytes)
S.M.A.R.T. Status : Not Supported
Disk Number : 1
Partition Number : 0
The drive currently has two partitions on it, one for my iMac time machine and one for my MacBook Pro time machine. When I try to repartition this drive in Disk Utility I get the error message:
"Partition failed with the error:
Could not modify partition map because filesystem verification failed"
I need to clear some storage space on my internal MacBook drive and I really don't want to spend money on another external disk when I have over 500 GB just sitting there empty. I would just write directly to the current partitions, but OSX flips out when you try to save random files straight to a Time Machine formatted disk.

This seems to be a common question and problem around these parts. I am certain one of the resident geniuses will soon be responding.
While we both wait for that to happen, you may find some helpful information by doing a search for "repartition hard drive" or "repartition time machine" in that little search box just to the right. There seems to be lots of good information there.
But remember that monkeying around with partitioning is not without risk, and we are talking about your Time Machine backups here.
Be careful and good luck.
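For what it's worth, "filesystem verification failed" often means Disk Utility refuses to touch the partition map until the volumes on it pass verification. A possible first step (a sketch, not a guaranteed fix; `disk1` is taken from the poster's listing above, and the slice numbers are hypothetical until confirmed with `diskutil list`):

```shell
# List all disks and their slices; confirm the WD My Book is still disk1
# and note the identifiers of its two Time Machine volumes (e.g. disk1s3, disk1s5).
diskutil list

# Verify each volume; if verification reports errors, try to repair it.
# Repartitioning usually starts working again once verification passes.
diskutil verifyVolume disk1s3
diskutil repairVolume disk1s3
```

If repair fails repeatedly, the usual fallback is to back the data up elsewhere and erase the whole disk, which is exactly the risk worth weighing against buying another drive.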
Arch -
Hello,
Below I provide complete code to reproduce the behavior I am observing. You can run it in tempdb or any other database; the choice doesn't matter. The test query provided at the top of the script is pretty silly, but I have observed the same performance degradation with about a dozen queries of varying complexity, so this is just the simplest one I am using as an example here. Note that I also included approximate run times in the script comments (based on what I observed on my machine). Here are the steps, with numbers corresponding to the numbers in the script:
1. Run script from #1 to #7. This will create the two test tables, populate them with records (40 mln. and 10 mln.) and build regular clustered indexes.
2. Run test query (at the top of the script). Here are the execution statistics:
Table 'Main'. Scan count 5, logical reads 151435, physical reads 0, read-ahead reads 4, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Txns'. Scan count 5, logical reads 74155, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 5514 ms,
elapsed time = 1389 ms.
3. Run script from #8 to #9. This will replace regular clustered indexes with columnstore clustered indexes.
4. Run test query (at the top of the script). Here are the execution statistics:
Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Main'. Scan count 4, logical reads 54850, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 828 ms,
elapsed time = 392 ms.
As you can see the query is clearly faster. Yay for columnstore indexes!.. But let's continue.
5. Run script from #10 to #12 (note that this might take some time to execute). This will move about 80% of the data in both tables to a different partition. You should be able to see that the data has been moved when running step #11.
6. Run test query (at the top of the script). Here are the execution statistics:
Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Main'. Scan count 4, logical reads 54817, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 8172 ms,
elapsed time = 3119 ms.
And now look: the I/O stats are the same as before, but the performance is the slowest of all our tries!
I am not going to paste the execution plans or the detailed properties of each operator here. They show up as expected: columnstore index scan, parallel/partitioned = true, and both the estimated and actual number of rows are lower than during the second run (when all of the data resided in the same partition).
So the question is: why is it slower?
Thank you for any help!
Here is the code to reproduce this:
--==> Test Query - begin --<===
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT COUNT(1)
FROM Txns AS z WITH(NOLOCK)
LEFT JOIN Main AS mmm WITH(NOLOCK) ON mmm.ColBatchID = 70 AND z.TxnID = mmm.TxnID AND mmm.RecordStatus = 1
WHERE z.RecordStatus = 1
--==> Test Query - end --<===
--===========================================================
--1. Clean-up
IF OBJECT_ID('Txns') IS NOT NULL DROP TABLE Txns
IF OBJECT_ID('Main') IS NOT NULL DROP TABLE Main
IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'PS_Scheme') DROP PARTITION SCHEME PS_Scheme
IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'PF_Func') DROP PARTITION FUNCTION PF_Func
--2. Create partition function
CREATE PARTITION FUNCTION PF_Func(tinyint) AS RANGE LEFT FOR VALUES (1, 2, 3)
--3. Partition scheme
CREATE PARTITION SCHEME PS_Scheme AS PARTITION PF_Func ALL TO ([PRIMARY])
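As a side note on the setup above: with RANGE LEFT and boundary values (1, 2, 3), RecordStatus = 1 maps to partition 1 and RecordStatus = 2 to partition 2, which is why the later UPDATE moves rows between partitions. You can confirm the mapping with $PARTITION (a quick check, assuming PF_Func has been created as above):

```sql
-- $PARTITION applies the partition function to a value and returns
-- the 1-based number of the partition that value would land in.
SELECT $PARTITION.PF_Func(1) AS p_for_status_1,  -- 1 (values <= 1)
       $PARTITION.PF_Func(2) AS p_for_status_2,  -- 2
       $PARTITION.PF_Func(4) AS p_for_status_4;  -- 4 (above the last boundary)
```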
--4. Create Main table
CREATE TABLE dbo.Main(
SetID int NOT NULL,
SubSetID int NOT NULL,
TxnID int NOT NULL,
ColBatchID int NOT NULL,
ColMadeId int NOT NULL,
RecordStatus tinyint NOT NULL DEFAULT ((1))
) ON PS_Scheme(RecordStatus)
--5. Create Txns table
CREATE TABLE dbo.Txns(
TxnID int IDENTITY(1,1) NOT NULL,
GroupID int NULL,
SiteID int NULL,
Period datetime NULL,
Amount money NULL,
CreateDate datetime NULL,
Descr varchar(50) NULL,
RecordStatus tinyint NOT NULL DEFAULT ((1))
) ON PS_Scheme(RecordStatus)
--6. Populate data (credit to Jeff Moden: http://www.sqlservercentral.com/articles/Data+Generation/87901/)
-- 40 mln. rows - approx. 4 min
--6.1 Populate Main table
DECLARE @NumberOfRows INT = 40000000
INSERT INTO Main (
SetID,
SubSetID,
TxnID,
ColBatchID,
ColMadeID,
RecordStatus)
SELECT TOP (@NumberOfRows)
SetID = ABS(CHECKSUM(NEWID())) % 500 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
SubSetID = ABS(CHECKSUM(NEWID())) % 3 + 1,
TxnID = ABS(CHECKSUM(NEWID())) % 1000000 + 1,
ColBatchId = ABS(CHECKSUM(NEWID())) % 100 + 1,
ColMadeID = ABS(CHECKSUM(NEWID())) % 500000 + 1,
RecordStatus = 1
FROM sys.all_columns ac1
CROSS JOIN sys.all_columns ac2
--6.2 Populate Txns table
-- 10 mln. rows - approx. 1 min
SET @NumberOfRows = 10000000
INSERT INTO Txns (
GroupID,
SiteID,
Period,
Amount,
CreateDate,
Descr,
RecordStatus)
SELECT TOP (@NumberOfRows)
GroupID = ABS(CHECKSUM(NEWID())) % 5 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
SiteID = ABS(CHECKSUM(NEWID())) % 56 + 1,
Period = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'), -- DATEADD(dd,ABS(CHECKSUM(NEWID())) % @Days, @StartDate)
Amount = CAST(RAND(CHECKSUM(NEWID())) * 250000 + 1 AS MONEY),
CreateDate = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'),
Descr = REPLICATE(CHAR(65 + ABS(CHECKSUM(NEWID())) % 26), ABS(CHECKSUM(NEWID())) % 20),
RecordStatus = 1
FROM sys.all_columns ac1
CROSS JOIN sys.all_columns ac2
--7. Add PK's
-- 1 min
ALTER TABLE Txns ADD CONSTRAINT PK_Txns PRIMARY KEY CLUSTERED (RecordStatus ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
CREATE CLUSTERED INDEX CDX_Main ON Main(RecordStatus ASC, SetID ASC, SubSetId ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
--==> Run test Query --<===
--===========================================================
-- Replace regular indexes with clustered columnstore indexes
--===========================================================
--8. Drop existing indexes
ALTER TABLE Txns DROP CONSTRAINT PK_Txns
DROP INDEX Main.CDX_Main
--9. Create clustered columnstore indexes (on partition scheme!)
-- 1 min
CREATE CLUSTERED COLUMNSTORE INDEX PK_Txns ON Txns ON PS_Scheme(RecordStatus)
CREATE CLUSTERED COLUMNSTORE INDEX CDX_Main ON Main ON PS_Scheme(RecordStatus)
--==> Run test Query --<===
--===========================================================
-- Move about 80% of the data into a different partition
--===========================================================
--10. Update "RecordStatus", so that data is moved to a different partition
-- 14 min (32002557 row(s) affected)
UPDATE Main
SET RecordStatus = 2
WHERE TxnID < 800000 -- range of values is from 1 to 1 mln.
-- 4.5 min (7999999 row(s) affected)
UPDATE Txns
SET RecordStatus = 2
WHERE TxnID < 8000000 -- range of values is from 1 to 10 mln.
--11. Check data distribution
SELECT
OBJECT_NAME(SI.object_id) AS PartitionedTable
, DS.name AS PartitionScheme
, SI.name AS IdxName
, SI.index_id
, SP.partition_number
, SP.rows
FROM sys.indexes AS SI WITH (NOLOCK)
JOIN sys.data_spaces AS DS WITH (NOLOCK)
ON DS.data_space_id = SI.data_space_id
JOIN sys.partitions AS SP WITH (NOLOCK)
ON SP.object_id = SI.object_id
AND SP.index_id = SI.index_id
WHERE DS.type = 'PS'
AND OBJECT_NAME(SI.object_id) IN ('Main', 'Txns')
ORDER BY 1, 2, 3, 4, 5;
PartitionedTable PartitionScheme IdxName index_id partition_number rows
Main PS_Scheme CDX_Main 1 1 7997443
Main PS_Scheme CDX_Main 1 2 32002557
Main PS_Scheme CDX_Main 1 3 0
Main PS_Scheme CDX_Main 1 4 0
Txns PS_Scheme PK_Txns 1 1 2000001
Txns PS_Scheme PK_Txns 1 2 7999999
Txns PS_Scheme PK_Txns 1 3 0
Txns PS_Scheme PK_Txns 1 4 0
--12. Update statistics
EXEC sys.sp_updatestats
--==> Run test Query --<===
Hello Michael,
I just simulated the situation and got the same results as in your description. However, I did one more test - I rebuilt the two columnstore indexes after the update (and test run). I got the following details:
Table 'Txns'. Scan count 8, logical reads 12922, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Main'. Scan count 8, logical reads 57042, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 251 ms, elapsed time = 128 ms.
As an explanation of the behavior: because an UPDATE against a clustered columnstore index (CCI) is executed as a DELETE plus an INSERT, you ended up with all the original row groups of the index holding almost entirely deleted rows, plus almost the same number of new row groups holding the new data (coming from the update). I suppose scanning the deleted bitmap, or something related to that "fragmentation", caused the additional slowness at your end.
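For reference, the rebuild step described above would look something like this (a sketch; the index and table names are taken from the script, and sys.column_store_row_groups requires SQL Server 2014 or later):

```sql
-- Rebuilding the clustered columnstore indexes discards the deleted bitmaps
-- and recompresses the surviving rows into fresh row groups per partition.
ALTER INDEX PK_Txns ON Txns REBUILD;
ALTER INDEX CDX_Main ON Main REBUILD;

-- Inspect row group health before/after: a large deleted_rows count
-- relative to total_rows is the "fragmentation" suspected above.
SELECT OBJECT_NAME(object_id) AS table_name,
       partition_number,
       state_description,
       total_rows,
       deleted_rows
FROM sys.column_store_row_groups
WHERE OBJECT_NAME(object_id) IN ('Main', 'Txns')
ORDER BY table_name, partition_number;
```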
Ivan Donev MCITP SQL Server 2008 DBA, DB Developer, BI Developer