SQL protection - best practices
I need some help understanding the difference between an Express full backup and synchronization for SQL protection. I have always thought that the SQL logs were truncated during an Express full backup, but I have read several articles that claim the opposite.
What is the best practice for protecting SQL databases in general? And which recovery model should be used (full or simple)?
/Amir
Please read below
Backing up SQL with DPM
Why DPM 2010 and SQL are Better Together?
SQL Logs not getting truncated ?
(comments)
SCDPM: Backup SQL and Truncate SQL Logs
Have a nice day !!!
DPM 2012 R2: Remove Recovery Points
Similar Messages
-
SQL query writing best practices
Hi forum,
Does anybody have a tutorial on SQL query-writing best practices? Please share it with me.
Thanks
For example:
[url http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/toc.htm]Oracle Database Performance Tuning Guide 10g Release 2 (10.2)
[url http://people.aapt.net.au/roxsco/tuning/]Oracle SQL Tuning Guide
Gints Plivna
http://www.gplivna.eu -
Request for howto - error processing best practice
Hi JDev Team. Something I would like to see in a future HOWTO is error handling in a BC4J/JSP application. What is best practice? How do we make sure that when a database error occurs, we can trap the error and provide a friendly error message or, failing that, at least ensure the standard error is usable by a maintenance programmer. For example, the following error occurs if a referential constraint restricts the delete:
javax.servlet.jsp.JspException: JBO-26041: Failed to post data to database during "Delete": SQL Statement " DELETE FROM TECHTRANSFER.TTSITES Sites WHERE SITEID=:1".
In fact, the same error message is displayed for almost any database error - the programmer can't fix the problem when he has no idea what it is! (The same applies to update and insert.)
I wasn't going to request this until I had read all of the help available on error processing but the way this project is going I won't get time. If you think that it is adequately covered in the help, then fine, just let me know where.
Thanks,
Simon
You can enclose your BC4J/JSP code in a try/catch expression. That way, if a failure occurs, you can trap it, display a friendly error, and do whatever you want with the exception.
What I have been doing for development purposes is sending a modified errorpage.jsp via email. Here is what gets emailed to me (*'s replace potentially sensitive data) and displayed to the screen (I'm eventually going to replace all the displayed garbage with something friendly):
An error occurred in application PDC User Administration
User Session Properties:
Session ID: *********
App ID: *********
User Name: *********
User ID: *********
Priv Role: *********
Password: *********
Org No: *********
First Name: skunitzer
Last Name: ANALYST
App Title : PDC User Administration
Current Url: insertNewUser.jsp
Specific error is javax.servlet.jsp.JspException: JBO-25013: Too many objects match the primary key oracle.jbo.Key[1423 ].
Parameters:
LastName
Kunitzer
EmailAddress
[email protected]
FirstName
SteveLiveTest
OrgNo
PhoneWorkNo
I have no phone #
ExpireDate
2001-04-26
ExpireDateString
jRQiIsFGANIbrGlihGTl[epofZmSNgEkGqbHN@iErHNPRi
UserID
UserPrivs
Exception:
javax.servlet.jsp.JspException: JBO-25013: Too many objects match the primary key oracle.jbo.Key[1423 ].
Message:
JBO-25013: Too many objects match the primary key oracle.jbo.Key[1423 ].
Localized Message:
JBO-25013: Too many objects match the primary key oracle.jbo.Key[1423 ].
Stack Trace:
javax.servlet.jsp.JspException: JBO-25013: Too many objects match the primary key oracle.jbo.Key[1423 ].
at java.lang.Throwable.fillInStackTrace(Native Method)
at java.lang.Throwable.fillInStackTrace(Compiled Code)
at java.lang.Throwable.<init>(Compiled Code)
at java.lang.Exception.<init>(Compiled Code)
...Stack Trace goes on but I won't bother with it anymore...
While not always as specific as I would like, I have not had too much trouble hunting down the errors.
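The approach described above - trap the exception, log or email the full diagnostics, and show the user a friendly message instead of a raw JBO error - is language-agnostic. Here is a minimal Python sketch of the same pattern (the function name, the simulated error, and the messages are hypothetical, not taken from the original JSP code):

```python
import logging
import traceback

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("errorpage")

def delete_site(site_id: int) -> str:
    """Hypothetical handler: trap the low-level error, log the detail,
    and return a friendly message for the user."""
    try:
        # Stand-in for the real database call that raises JBO-26041.
        raise RuntimeError(
            'JBO-26041: Failed to post data to database during "Delete"')
    except RuntimeError:
        # Full detail (stack trace included) goes to the log / email hook...
        log.error("delete_site(%s) failed:\n%s", site_id,
                  traceback.format_exc())
        # ...while the user sees something actionable instead of a raw code.
        return ("This record could not be deleted; it is probably still "
                "referenced by other data. The details have been logged.")

print(delete_site(42))
```

The maintenance programmer still gets the full stack trace in the log (or the emailed errorpage), while the end user only sees the friendly summary.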
-
Best practice for SAP users who leave the company
Hi
Could anyone recommend a best-practice document, or give advice on how to deal with SAP user IDs when employees/contractors/consultants leave? I am the Basis admin just starting an SAP implementation and we have no dedicated authorization team at the moment, so I have been asked to look into this:
Currently we set the validity date in SU01 to the termination date.
We check there are no background jobs scheduled under that user ID; if there are, we change the job owner to a valid user (we try to run all background jobs under an admin account).
We do not delete the user: from an audit point of view, deletion restricts the information you can report on and has implications for change documents etc., so it is best to lock the ID with validity dates.
Can anyone advise further?
We are running SAP ECC 5.0 on Windows 2003 64 Bit/MS SQL 2000.
Thanks for any help.
Hi,
Different people will tell you different versions of what they believe is best practice, but in my opinion you are already doing reasonably well.
What I prefer is
1. Lock ID & set validity date.
2. Assign user to user group LEAVER or EXPIRED or something similar (helps with reporting) out of SUIM/S_BCE* reports.
3. Delete role assignment (should you need it, the role assignment will be in the change history docs anyway).
4. Check background jobs & act accordingly.
For ease of getting info I prefer not to delete the ID though plenty of people do. -
SAP Business One 2007 - SQL security best practice
I have a client with a large user base running SAP Business One 2007.
We are concerned about the use of the SQL sa user and the ability to change this ID's password from the SAP Business One logon.
We therefore want to move to Windows Authentication (i.e. a trusted connection) from the SAP BO logon. It appears, however, that this can only work by granting the Windows IDs (of the SAP users) sysadmin access in SQL.
Does anyone have a better method of securing SAP Business One, or is there a recommended best practice? Any help would be appreciated.
Damian
See the Administrator's Guide for best practices.
You can use SQL Authentication mode; don't tick 'Remember password'.
Also check this thread
SQL Authentication Mode
Edited by: Jeyakanthan A on Aug 28, 2009 3:57 PM -
Best practice to detect changes between two tables
Hi,
I am trying to write a query that shows me the differences between a table in my DWH and the table in the source system. It should show me new, deleted, and updated rows.
My approach is to do a full outer join based on the key and then check whether any of the columns changed (source.A != DWH.A or source.B != DWH.B, etc.) to get the updated rows.
My problem is that my table has millions of rows and more than 100 columns (number, nvarchar, etc.), so the query takes hours.
Is there any best-practice solution to optimize that query by rewriting it, setting indexes, or using hash codes? I played around with hash codes, but it wasn't really faster.
(BTW: CDC etc. are not allowed.)
Thanks for any ideas!
890408 wrote:
So I guess I can't use the MERGE statement, as it is just for SCD1.
Yes you can:
create table products(
  name           varchar2(20),
  price          number,
  effective_from date,
  effective_to   date,
  active         number
);

insert
into products
values(
  'Samuel Adams, 6-pack',
  6.99,
  null,
  sysdate - 51,
  0
);

insert
into products
values(
  'Samuel Adams, 6-pack',
  7.29,
  sysdate - 50,
  null,
  1
);

create table product_updates(
  name  varchar2(20),
  price number
);

insert
into product_updates
values(
  'Samuel Adams, 6-pack',
  7.49
);

insert
into product_updates
values(
  'Corona, 6-pack',
  6.49
);

select *
from products;

NAME                  PRICE EFFECTIVE_FROM EFFECTIVE_TO ACTIVE
Samuel Adams, 6-pack   6.99                13-OCT-11         0
Samuel Adams, 6-pack   7.29 14-OCT-11                        1

select *
from product_updates;

NAME                  PRICE
Samuel Adams, 6-pack   7.49
Corona, 6-pack         6.49

merge
into products p
using (
       select name,
              price,
              'update' flag
         from product_updates
       union all
       select chr(0) || name name,
              price,
              'insert' flag
         from product_updates
      ) u
on (
    p.name = u.name
   )
when matched
  then update
          set effective_to = sysdate,
              active = 0
        where active = 1
when not matched
  then insert
       values(
              substr(u.name,2),
              u.price,
              sysdate,
              null,
              1
             )
        where flag = 'insert';

3 rows merged.

select *
from products;

NAME                  PRICE EFFECTIVE_FROM EFFECTIVE_TO ACTIVE
Samuel Adams, 6-pack   6.99                13-OCT-11         0
Samuel Adams, 6-pack   7.29 14-OCT-11      03-DEC-11         0
Samuel Adams, 6-pack   7.49 03-DEC-11                        1
Corona, 6-pack         6.49 03-DEC-11                        1
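For comparison, the full-outer-join change detection described in the question can also be sketched outside the database. This is a minimal pandas sketch (the key and column names are hypothetical, not from the poster's tables) that classifies rows as inserted, deleted, or updated:

```python
import pandas as pd

# Hypothetical source-system and DWH extracts, keyed on "key".
source = pd.DataFrame({"key": [1, 2, 4], "a": [10, 20, 40]})
dwh    = pd.DataFrame({"key": [1, 2, 3], "a": [10, 25, 30]})

# Full outer join; the _merge indicator tells us which side each key came from.
cmp = source.merge(dwh, on="key", how="outer",
                   indicator=True, suffixes=("_src", "_dwh"))

inserted = cmp[cmp["_merge"] == "left_only"]["key"].tolist()   # new in source
deleted  = cmp[cmp["_merge"] == "right_only"]["key"].tolist()  # gone from source
both     = cmp[cmp["_merge"] == "both"]
updated  = both[both["a_src"] != both["a_dwh"]]["key"].tolist()

print(inserted, deleted, updated)  # [4] [3] [2]
```

For the poster's 100+ column case, the per-column comparison is usually replaced by comparing a single hash computed over all non-key columns on each side, which is the same idea the poster experimented with.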
SY. -
Best Practices for Email Addresses?
Hi Guys,
Are there any best-practice guides / documents / etc. for configuring users' email addresses? We have a large turnover of users, and obviously they sometimes have the same name as previous/current employees (we do not delete any old accounts / mailboxes). My question is whether or not it is OK to use numbers in an email address (i.e. [email protected])?
Thanks
StephenHi,
It's OK to use numbers in an email address.
The format of email addresses is local-part@domain where the local-part may be up to 64 characters long and the domain name may have a maximum of 253 characters.
The local-part of the email address may use any of these ASCII characters (per RFC 5322):
Uppercase and lowercase English letters (a–z, A–Z) (ASCII: 65-90, 97-122)
Digits 0 to 9 (ASCII: 48-57)
Characters !#$%&'*+-/=?^_`{|}~ (ASCII: 33, 35-39, 42, 43, 45, 47, 61, 63, 94-96, 123-126)
Character . (dot, period, full stop) (ASCII: 46) provided that it is not the first or last character, and provided also that it does not appear two or more times consecutively (e.g. John..[email protected] is not allowed.).
Special characters are allowed with restrictions. They are:
Space and "(),:;<>@[\] (ASCII: 32, 34, 40, 41, 44, 58, 59, 60, 62, 64, 91-93)
The restrictions for special characters are that they must only be used when contained between quotation marks, and that 3 of them (The space, backslash \ and quotation mark " (ASCII: 32, 92, 34)) must also
be preceded by a backslash \ (e.g. "\ \\\"").
For more information, please refer to this similar thread.
https://social.technet.microsoft.com/Forums/exchange/en-US/69f393aa-d555-4f8f-bb16-c636a129fc25/what-are-valid-and-invalid-email-address-characters
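As an illustrative sketch (assumptions: unquoted local parts only, no quoted-string or comment handling, so this is not a full RFC 5322 validator), the rules above can be checked with a small Python function:

```python
import re

# Unquoted local-part atoms: letters, digits, and the listed special
# characters. Dots are allowed only between atoms (not first, not last,
# never doubled), and the whole local part is capped at 64 characters.
ATOM = r"[A-Za-z0-9!#$%&'*+\-/=?^_`{|}~]+"
LOCAL_PART = re.compile(rf"^{ATOM}(\.{ATOM})*$")

def local_part_ok(local: str) -> bool:
    return len(local) <= 64 and bool(LOCAL_PART.match(local))

print(local_part_ok("john.smith55"))  # True: digits and inner dots are fine
print(local_part_ok("John..Doe"))     # False: consecutive dots
print(local_part_ok(".john"))         # False: leading dot
```

So numbers in an address such as the one the poster asks about are unambiguously valid.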
Best Regards. -
Best Practices for doing Master Scheduling using SNP
Hello Gurus ,
Can you please suggest the best practices for doing Master Scheduling using SNP? Which engine should be used, what would that mean, etc.?
Regards,
Nick
APC Back-UPS XS 1300. $169.99 at Best Buy.
Our power outages here are usually only a few seconds; this should give my server about 20 or 25 minutes run-time.
I'm setting up the PowerChute software now to shut down the computer when 5 minutes of power is left. The load with the monitor sleeping is 171 watts.
This has surge protection and other nice features as well.
-Noel -
Hi,
Is it best practice to do all field validation using Java, or could field validation be done using Java beans in JSP?
Thanks
Vivek
2. There's nothing wrong with what you have there; this is a custom tag library.
Yes, for small applications, that would suffice. Remember that you are moving away from an MVC model when using JSTL SQL tags.
Consider having a dao layer if your app is medium/big or destined to grow with time.
You simply cannot maintain all the complexities in your application (and by extrapolation, your queries) - read
1. http://www.onjava.com/pub/a/onjava/2002/03/13/jsp.html?page=2
2. http://today.java.net/pub/a/today/2003/11/27/jstl2.html
There's some good points for and against here
1. http://weblogs.java.net/blog/johnm/archive/2003/11/the_community_i.html
There's some support here
1. http://acroyear.blog-city.com/a_reasonable_reason_for_the_sql_tags.htm
cheers,
ram. -
For Exadata X2-2, is there a best-practices document on enabling Smart Scans for all the application code?
We cover more in our book, but here are the key points:
1) Smarts scans require a full segment scan to happen (full table scan, fast full index scan or fast full bitmap index scan)
2) Additionally, smart scans require a direct path read to happen (reads go directly to the PGA, bypassing the buffer cache) - this is automatically done for all parallel scans (unless parallel_degree_policy has been changed to AUTO). For serial sessions, the decision to do a serial direct path read depends on the segment size, the _small_table_threshold parameter value (which is derived from the buffer cache size), and how many blocks of the segment are already cached. If you want to force the use of a serial direct path read for your serial sessions, you can set _serial_direct_read = always.
3) Thanks to the above requirements, smart scans are not used for index range scans, index unique scans and any single row/single block lookups. So if migrating an old DW/reporting application to Exadata, then you probably want to get rid of all the old hints and hacks in there, as you don't care about indexes for DW/reporting that much anymore (in some cases not at all). Note that OLTP databases still absolutely require indexes as usual - smart scans are for large bulk processing ops (reporting, analytics etc, not OLTP style single/a few row lookups).
Ideal execution plan for taking advantage of smart scans for reporting would be:
1) accessing only required partitions thanks to partition pruning (partitioning key column choices must come from how the application code will query the data)
2) full scan the partitions (which allows smart scans to kick in)
2.1) no index range scans (single block reads!) and ...
3) joins all the data with hash joins, propagating results up the plan tree to next hash join etc
3.1) This allows bloom filter predicate pushdown to cell to pre-filter rows fetched from probe row-source in hash join.
So, simple stuff really - and many of your everyday optimizer problems just disappear when there's no trouble deciding whether to do a full scan vs. a nested loop with some index. Of course this is a broad generalization; your mileage may vary.
Even though DWs and reporting apps benefit greatly from smart scans and some well-partitioned databases don't need any indexes at all for reporting workloads, the design advice does not change for OLTP at all. It's just RAC with faster single block reads thanks to flash cache. All your OLTP workloads, ERP databases etc still need all their indexes as before Exadata (with the exception of any special indexes which were created for speeding up only some reports, which can take better advantage of smart scans now).
Note that there are many DW databases out there which are not used just only for brute force reporting and analytics, but also for frequent single row lookups (golden trade warehouses being one example or other reference data). So these would likely still need the indexes to support fast single (a few) row lookups. So it all comes from the nature of your workload, how many rows you're fetching and how frequently you'll be doing it.
And note that smart scans only make data access faster - not sorts, joins, PL/SQL functions coded into the select column list or where clause, or application loops doing single-row processing. These still work as usual (with the exception of the bloom filter pushdown optimization for hash joins). Of course, when moving to Exadata from your old E25k you'll see a speedup anyway, as the Xeons with their large caches are just fast :-)
Tanel Poder
Blog - http://blog.tanelpoder.com
Book - http://apress.com/book/view/9781430233923 -
Hi all,
I just read the cluster installation guide, and I was wondering if anyone could help me with the best practice for it.
I will have an installation of CS, Oracle Distributed Document Capture, Inbound Refinery, and a database.
I was thinking of an installation like this:
1 server: Database
2 server: CS (with components), webserver, Document capture
3 server: CS (with components), webserver
4 server: Inbound refinery
5 server: Storage
Is this OK, or can I do it with fewer servers? This will be an environment only for document management and workflows, no web content for now.
Please provide me with information on best practice, and share your cluster installations.
The best thing to do depends on what you want out of your application. Using a distributed destination can help if you want to ensure that producers are load-balanced among several currently running destinations. It can also help the availability of consumers (as long as you set your forwarding delay parameters properly - by default, forwarding messages from one physical queue to the other does not happen). In neither case, when using distributed destinations, will your client-side artifacts be automatically reconnected (however, some level of automatic reconnection is coming in 9.0.1). Furthermore, any persistent messages on any physical destination will not suddenly be available anywhere else until a crashed machine has been brought back up.
That is where migratable targets come in. If your application wants to fail over using some sort of redundant hardware to back up the disks (e.g. dual-ported scsi drives) then migratable targets make more sense. You can script the migration of a JMSServer complete with its store of persistent messages from a failed machine to another machine. This increases the fail-over capability of your persistent messages without using a distributed queue (but you would not get the producer/consumer load-balancing you get with DDs).
So this is not an exact science. Some applications need the load balancing, others need the persistent migration capability. Most need both. There is nothing stopping you from using physical destinations in your distributed destination that are targeted to migratable JMSServers. You would then have the ability to migrate the persistent state of the destination, and also have load balancing of the producers and consumers.
Hope that helps...
John Wells (Aziz)
[email protected]
-
Any best practice to archive POs which do not have a corresponding invoice
Hello,
As part of the initial implementation and conversion, we have a lot of POs / LTAs created whose corresponding invoices were never converted into SAP from the legacy system. The SAP archiving program flags those as not business-complete, as the invoiced quantity does not match the PO quantity (there are no invoices to start with). Just flagging 'delivery complete' and 'final confirmation' on the PO does not help. Has anybody run into a similar situation, and how was it resolved? I am reluctant to enhance the standard SAP archiving program to bypass those checks; that is only my last option. Any SAP-recommended Note / best practice etc. would help.
Satyajit Deb
Where is the invoice posted?
was the invoice posted in the legacy system?
Clearing the GR/IR account with MR11 will usually close such POs. -
My question is the same: when granting a user or role in the application, what is the best practice? How do you decide the level at which to apply the role - page definitions, XML files, or some other file that I have missed?
As for my concern, I would go for page definition files.
-
Best practice in SAP BW master data management and transport
Hi sap bw gurus,
I'd like to know what the best practice is for SAP BW master data transport. For example, if I updated my attributes in development, which 'required only' BW objects should I transport?
Appreciate advice.
Thank you,
Eric
Hi Vishnu,
Thanks for the reply, but that answer may be more suitable if I were implementing a new BW system. What I'm looking for is more on daily operational maintenance and transport (a BW system that went live a while ago).
Regards,
Eric -
What is the best practice for providing a text file to a Java class in an OSGi bundle in CQ?
This is probably a very basic question so please bear with me.
What is the best way to provide a .txt file to be read by a Java class in a OSGi bundle in CQ 5.5?
I have been able to read a file called "test.txt" that I put in a structure like /src/resources/<any-sub-folder>/test.txt from my Java class at /src/main/java/com/test/mytest/Test.java using the bundle's getResource and getEntry calls, but I was not able to use context.getDataFile. How is this getDataFile method call to be used?
And what if I want to read a file located in another bundle - is that possible? Or can I add the file to some repository and then access it? I am not clear on how to do this.
I would also like to know what the best practice is if I need to provide a large data set in a flat file to be read by a Java class in CQ5.
Please provide detailed steps or point me to a how to guide or other helpful resources as I am a novice.
Thank you in advance for your time and help.
VS
As you can read in the OSGi Core specification (section 4.5.2), the getDataFile() method is for reading/writing a file in the bundle's private persistent area. It cannot be used to read files contained in the bundle. The issue Sham mentions refers to a version of Felix which is not used in CQ.
The methods you mentioned (getResource and getEntry) are appropriate for reading files contained in a bundle.
Reading a file from the repository is done using the JCR API. You can see a blueprint for how to do this by looking at the readFile method in http://svn.apache.org/repos/asf/jackrabbit/tags/2.4.0/jackrabbit-jcr-commons/src/main/java/org/apache/jackrabbit/commons/JcrUtils.java. Unfortunately, this method is not currently usable as it was declared incorrectly (it should be a static method, but is an instance method).
Regards,
Justin