Database Archive in MaxL
Hello
Can anybody tell me the syntax to archive an Essbase database? The bold line below is giving me a syntax error.
login admin password on localhost;
alter database App.database begin archive to file "/hyperion/AnalyticServices/app/sample/Basic/Archive_15062012.txt";
<b>alter database App.database archive to file "/hyperion/samplebasic.arc"</b>
alter database App.database end archive;
logout;
exit;
Many Thanks
Hi KosuruS..
Thanks for the inputs.
I have tried the below script as suggested, but I am still getting an error while executing the 'begin archive' command.
spool on to '/hyperion/hyp1mtn/Oracle/Middleware/user_projects/epmsystem1/EssbaseServer/essbaseserver1/bin/backup.log';
login admin password on localhost;
alter application Sample load database Basic;
alter system logout session on application Sample;
alter system kill request on application Sample;
<b>alter database Sample.Basic begin archive;</b>
export database sample.basic level0 data to data_file "/hyperion/hyp1mtn/Oracle/Middleware/user_projects/epmsystem1/EssbaseServer/essbaseserver1/bin/Export.txt";
alter database Sample.Basic end archive;
alter application Sample enable commands;
logout;
exit;
<b>Below is the log file content:</b>
MAXL> login admin password on localhost;
OK/INFO - 1051034 - Logging in user [admin@Native Directory].
OK/INFO - 1241001 - Logged in to Essbase.
MAXL> alter application Sample load database Basic;
OK/INFO - 1056013 - Application Sample altered.
MAXL> alter system logout session on application Sample;
OK/INFO - 1056092 - Sessions logged out [0].
OK/INFO - 1056090 - System altered.
MAXL> alter system kill request on application Sample;
OK/INFO - 1056090 - System altered.
MAXL> alter database Sample.Basic begin archive;
ERROR - 1242020 - (1) Syntax error near end of statement.
MAXL> export database sample.basic level0 data to data_file "/hyperion/hyp1mtn/Oracle/Middleware/user_projects/epmsystem1/EssbaseServer/essbaseserver1/bin/Export.txt";
OK/INFO - 1019020 - Writing Free Space Information For Database [Basic].
OK/INFO - 1005029 - Parallel export enabled: Number of export threads [1].
OK/INFO - 1005031 - Parallel export completed for this export thread. Blocks Exported: [177]. Elapsed time: [0.127]..
OK/INFO - 1005002 - Ascii Backup Completed. Total blocks: [177]. Elapsed time: [0.132]..
OK/INFO - 1013270 - Database export completed ['sample'.'basic'].
MAXL> alter database Sample.Basic end archive;
OK/INFO - 1056023 - Database Sample.Basic altered.
MAXL> alter application Sample enable commands;
OK/INFO - 1056013 - Application Sample altered.
MAXL> logout;
User admin is logged out
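For reference, here is a sketch of the archive sequence that parses on 9.x, assuming the MaxL grammar in which begin archive takes a mandatory to file clause (the archive file path is illustrative):

```maxl
login admin password on localhost;

/* put the database in read-only archive mode and write the list of
   files to back up into the named archive file */
alter database Sample.Basic begin archive to file '/hyperion/Basic_backup.arc';

/* while the database is in archive mode, copy the files listed in the
   archive file using operating-system tools */

alter database Sample.Basic end archive;
logout;
exit;
```

The single-statement "archive to file" form shown in bold in the first post would then only apply on releases whose MaxL grammar includes it.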
Similar Messages
-
How to install Oracle Content Database Archive Adapter for SAP on windows
Hi,
I would like to install CDAA (Oracle Content Database Archive Adapter) on 32-bit and 64-bit Windows. Can somebody point me to where I can download this (if it is not free, I would like to try out a trial version first)? If it depends on other Oracle components, what do I need to install additionally? Links to all the required components would be a great help.
Also, if there is a demo where I can see which features CDAA supports and how it compares to other products, say IBM CommonStore, please let me know.
Thanks a lot ..
-Rohit
Can anybody help me with how to install Oracle Applications 11i (11.5.10.2) on Windows 2003 R2 x64?
Hi,
I believe 11i installation on Windows 2003 x64 is not fully supported.
What is your OS version?
If it is Windows 2003 32-bit, it is supported on x86-64 servers for both the database and application tiers.
If it is Windows 2003 64-bit, only the database tier is supported on x86-64 servers.
Have a look at the following note
Frequently Asked Questions: Oracle E-Business Suite Support on x86-64 Doc ID: 343917.1
https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=343917.1 -
Standby database archive log apply in production server.
Dear All,
How do I apply standby database archive logs in the production server?
Please help me.
Thanks,
Manas
How can I use the standby database as primary for those 48 hours?
Perform a switchover (role transition).
First check if the standby is in sync with the primary database.
Primary database:
sql>select max(sequence#) from v$archived_log; ---> Value A
Standby database:
sql>select max(sequence#) from v$archived_log where applied='YES'; ---> Value B
Check if Value B is the same as Value A.
If the standby is in sync with the primary database, then perform the switchover operation (refer to the links below).
http://www.articles.freemegazone.com/oracle-switchover-physical-standby-database.php
http://docs.oracle.com/cd/B19306_01/server.102/b14230/sofo.htm
http://www.oracle-base.com/articles/9i/DataGuard.php#DatabaseSwitchover
manas
-
Hello all,
we are upgrading BW 3.0B to BW 3.5 on MS SQL Server 2000. As part of the post-upgrade activities, we need to check whether database archiving mode is active or disabled. How do we check this, and if it is disabled, how do we activate database archive log mode?
Kindly help us.
kind regards,
Aruna
Hi
Right click on the SAP database in Enterprise Manager - it is called the <SID> of your BW system.
There are three modes available - Simple, Bulk-Logged or Full.
You should set the mode to Full.
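Besides Enterprise Manager, the recovery model can be checked and changed from a query window; a sketch, with <SID> standing in for your database name:

```sql
-- Check the current recovery model of the SAP database:
SELECT DATABASEPROPERTYEX('<SID>', 'Recovery');

-- Switch it to Full so transaction-log backups are possible:
ALTER DATABASE [<SID>] SET RECOVERY FULL;
```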
Thanks
N.P.C -
PCUI: Remove "Search in database/Archive" dropdown list in Opportunities
Hi,
I'd like to remove the "Search in database/Archive" dropdown list in the extended search in the Opportunities application (Z)CRMD_BUS2000111. The customer is not using an archive and as such this is a waste of screen real estate in his opinion (rightfully so, I'd say).
Now, I haven't seen a reference to this dropdown list in the fieldgroup definition, but I have seen the "data sources for queries" (I guess that's what it is called in english) node with two entries in there, Archive and Database.
Should I just delete the entry for the archive from there? Is it not used by other applications, and would I break something else with this?
Thanks
Thomas
Hi Thomas,
Go to transaction CRMC_BLUEPRINT.
In this transaction go to the following path.
Layout of People-Centric UI -> Application Element -> Search Group -> Search scenario
Inside this path select the entry OPP_GROUP and double-click on assign shuffler.
You will find two entries on the next screen. If you remove these entries, the field will not appear in the Opportunity advanced search.
Reward points if this helped solve your problem!
Jash. -
A question about SAP database archiving
Hi Archiving experts,
I am working on SAP database archiving. Due to an operation error, I ran the write program S3VBAKWRS twice for object SD_VBAK, so two archiving sessions were generated in transaction SARA. Can I mark one archiving session as invalid?
Kelvin
Regards
Srikishan D wrote:
Hi,
>
> You would need to activate logon tickets for the ABAP system.
> Follow this link and set the profile parameters accordingly followed by a system restart.
>
> http://help.sap.com/saphelp_dimp50/helpdata/en/62/831640b7b6dd5fe10000000a155106/content.htm
>
> Regards,
> Srikishan
Hello there and happy new year !
Thanks for the link, according to the documentation in this link, I'm supposed to install
SAPSECULIB (I don't have enough rights in my MarketPlace account to download softwares).
However, as I understand it (please correct me if I'm wrong), what I have installed on my PC
is not a complete SAP system. It is just a trial version with an ABAP server for testing
local code and development. Are you sure that SAPSECULIB works with a trial version
of SAP? I checked the installation package, and there was no SAPSECULIB to install, so
I thought maybe it is not going to work with this version of SAP NetWeaver (yet, it is strange,
because the download page specifies that we may create Web Dynpros with this package).
Thanks in advance, -
Can i change the database connections through MAXL Scripts?
Hi,
just want to know whether I can change the database connections through MaxL scripts. I am using Essbase 9.3.1.
Hi John,
I have built my rule file by connecting to a database, and now I want to change the database connection. I know I can change it through the front end (File --> Open). I want to know whether I can change the database connection by writing a MaxL script.
How often should a database archive its logs
Oracle 9i, Windows 2003
Our production database archives logs every 2 minutes, and retaining those logs is becoming a problem. I was wondering whether it is healthy for a database to archive a log every 2 minutes and, if not, what I need to do.
thanks in advance
I did see the link you posted earlier and it doesn't say very much, does it? Apart from:
(1) big logs don't affect LGWR performance
(2) small logs will cause extra checkpoints to take place apart from those you wanted to take place by setting FAST_START_MTTR_TARGET
Well, taken together, both those points mean big logs are good news and small ones aren't. Which is exactly what I said above!
Unfortunately, you've completely missed the actual point I was making. It had nothing to do with "recovery time being a concern" nor with 100M being 100M and taking the same amount of time to apply.
It was actually point (2) above. I want logs big enough not to cause rapid log switching. But I have bulk loads. Therefore, I have to have ENORMOUS logs to prevent rapid log switching during those times. In fact, on one database I am connected to right now, I have 2GB redo logs which nevertheless manage to switch every 8 minutes on a Friday night. You can imagine the frequency of log switches we had when those logs were originally created at 5MB each! And the number of redo allocation retries...
Personally, I'd like 8GB logs to get it down to a log switch every 30 minutes or so on a Friday night, but with multiple members and groups, that's just getting silly.
But now I have an enormous log that will take forever and a day to fill up and switch when I'm NOT doing bulk loads. Ordinarily, without a forced log switch, my 2GB log takes 3 days to fill up.
If I were to have a catastrophic hardware failure, I could lose my current redo log. FAST_START_MTTR_TARGET can't do anything to ameliorate that loss: flushing the dirty buffers to disk regularly doesn't protect my data, actually. In fact, there is no way to recover transactions that are sitting in the current redo log if that log is lost. Therefore, having an enormous log full of hours and hours (in my case, about 72 hours'-worth) of redo is a massive data loss risk, and not one I'm prepared to take.
Therefore, ARCHIVE_LAG_TARGET allows me to have huge logs to deal with (2) above, but not to have more than half an hour's data at risk from total loss.
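The arrangement described, large logs for bulk loads plus a bounded exposure window, comes down to a single parameter; a sketch, with 1800 seconds matching the half-hour target above:

```sql
-- Force a log switch at least every 30 minutes, however slowly the
-- current log is filling (value is in seconds):
ALTER SYSTEM SET archive_lag_target = 1800 SCOPE = BOTH;
```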
I know why the parameter was invented. I can understand the word "target" in its name, after all. But the WAY it achieves its design goals is simply to force log switches. And forcing log switches is a good thing for everyone to be able to do, when appropriate, even if they're not using Data Guard and standby databases.
You are at liberty, of course, to keep on reserving your opinion, but in this case it's ill-informed and missing the point: loss of the current log is not nice and regular log switches means the effects of that unlikely event happening are completely predictable and (importantly) containable.
That's just one advantage, not "loads". But it's a huge one. -
Standby Database (Archive Log Mode)
I'm going to be setting up a standby database.
I understand that the primary database must be in archive log mode.
Is there any reason for the standby database to be in archivelog mode?
Since your primary DB is in archive log mode, so will be your standby when it is made primary. But you can use standby redo logs from version 9i onwards; these standby redo logs store the information received from the primary database.
As per metalink:-
>
Standby Redo Logs are only supported for the Physical Standby Database in Oracle 9i, as well as for Logical Standby Databases in 10g. Standby Redo Logs are only used if you have the LGWR activated for archival to the Remote Standby Database. If you have Standby Redo Logs, the RFS process will write into the Standby Redo Log as mentioned above, and when a log switch occurs, the Archiver Process of the Standby Database will archive this Standby Redo Log to an Archived Redo Log, while the MRP process applies the information to the Standby Database. In a Failover situation, you will also have access to the information already written in the Standby Redo Logs, so the information will not be lost.
>
Check metalink Doc ID: Note:219344.1
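Standby redo logs of the kind described are added with a statement of this shape; a sketch, with the group number, path, and size illustrative (size them to match your online redo logs):

```sql
-- Add one standby redo log group; repeat for (number of online log
-- groups + 1) groups, as commonly recommended:
ALTER DATABASE ADD STANDBY LOGFILE GROUP 10
  ('/m99/oradata/MARDB/srl10.log') SIZE 100M;
```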
Regards,
Anand -
Database query in MaxL DBS-Name list all file information failed
When I tried the list all file information command in MaxL, it gave me an error saying the user doesn't exist. When I check the user through the display user; command in MaxL, I get the information listed below.
Is there something wrong with the way the user was created ?
How can I (Admin) get the index and data file information?
MAXL> query database Application.Database list all file information;
ERROR - 1051012 - User ADMIN@Native Directory does not exist.
MAXL> display user;
user        description  logged in  password_reset_days  enabled  change_password  type  protocol  conn param           application_access_
+----------+------------+----------+--------------------+--------+----------------+-----+---------+--------------------+-------------------
ADMIN@Nati               TRUE       0                    TRUE     FALSE            3     CSS       native://DN=cn=911,  1
Has anyone resolved the problems with using TNSFormat?
As is, I want to move to a shared server setup and to do that I want to use TNSFormat and point to a tns entry which is setup for IPC+Shared connection.
But the Oracle Home that has the Oracle HTTP Server (from the companion CD) does not have SQL*net installed and does not seem to understand TNS.
I have TNS_ADMIN set up, and I have ORACLE_HOME_LISTENER pointing to the DB home.
For the OHS home, "sqlplus login/pw@ipcshared" works, but "tnsping ipcshared" does not, since tnsping does not exist in the OHS home.
I cannot install SQL*Net from the CD1, since it requires a dedicated/new home and does not want to install in the OHS Home.
The only format that works in a dedicated OHS Home setup is ServiceNameFormat.
Any help or input would be very helpful.
Regards
ps. This is a redhat linux setup.
Message was edited by:
Oli_t -
Standby database Archive log destination confusion
Hi All,
I need your help here..
This is the first time this situation has arisen. We had sync issues in the Oracle 10g standby database prior to this archive log destination confusion, so we rebuilt the standby to overcome the sync issue. But ever since then, the archive logs in the standby database have been moving to two different locations.
The spfile entries are provided below:
*.log_archive_dest_1='LOCATION=/m99/oradata/MARDB/archive/'
*.standby_archive_dest='/m99/oradata/MARDB/standby'
Prior to rebuilding the standby database, the archive logs were moving to the /m99/oradata/MARDB/archive/ location, which is the correct location. But now the archive logs are moving to both /m99/oradata/MARDB/archive/ and /m99/oradata/MARDB/standby, with the majority of them moving to the /m99/oradata/MARDB/standby location. This is pretty unusual.
The archives in the production are moving to /m99/oradata/MARDB/archive/ location itself.
Could you kindly help me overcome this issue.
Regards,
Dan
Hi Anurag,
Thank you for update.
Prior to rebuilding the standby database, standby_archive_dest was set as it is now. No modifications were made to the archive destination locations.
The primary and standby databases are on different servers, and Data Guard is used to transfer the files.
I wanted to highlight one more point here: the archive locations are similar to the ones I mentioned for the other standby databases, but there the archive logs are moving only to the /archive location and not to the /standby location. -
SGEN & database archive log mode
Hi Experts,
To apply ABAP support packs, I disabled archive log mode of the database and successfully applied the support packs.
As post processing, I kicked off SGEN.
Is it required that the database be in "no archive log mode" while SGEN is running, or can I enable it?
Thanks
Putla
Not sure what database it is, but if it is Oracle:
$sqlplus / as sysdba
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database noarchivelog;
SQL> alter database open;
After the completion of SGEN.....
$sqlplus / as sysdba
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open; -
ESS Leave Request database - Archiving or Maintenance
Hi
I understand the leave request detals are stored in the following tables
PTREQ_HEADER (Request Header)
PTREQ_ITEMS (Request Items)
PTREQ_ACTOR (Request Participant)
PTREQ_NOTICE (Note for Request)
PTREQ_ATTABSDATA (Request Data for Attendances/Absences)
Over a period of time, the size must build up. In an organization of 200,000 employees, there must be a need to keep the size of this database within manageable limits. I could not find any documentation on archiving or deleting the entries in these tables. There is a report RPTARQDBDEL, but according to the documentation it is a troubleshooting tool and must be used for individual cases only, and with extreme caution.
Can anyone please advise whether there are any tools/best practices for maintaining the size of this database periodically? As several tables are involved, one would like to avoid having to write a custom program for this.
There is no standard archiving procedure for the leave request tables. You need, however, to use PU22 etc.
Usually no deletion should be done, as it might lead to inconsistencies.
So the best option is to use the report RPTARQDBDEL, which will ensure consistency is maintained while deleting the records;
you can delete the old records using it.
You need to be careful while purging the data from these tables -
Hi All,
Currently we are working on data archiving. We have some tables where the volume
of data is increasing rapidly. At present we have 55 GB of data in two main tables, and
application performance is decaying day by day.
Since table partitioning failed to improve performance, we are planning to move
the data year-wise to different tables in such a way that the application can still refer to the
archived data.
Please provide us some input on how data archiving can be achieved.
Thanks & Regards
Avinash
Hi,
Yes, it takes more time for query execution. We are using reports
which run high-cost DML queries and slow down the system
during peak hours.
To be more descriptive,
We have one main table, TAGA_DTLS_SUBMISSIONS, into which 24 rows
are inserted for each form submission. On average, 35 thousand
submissions are made per day. The size of the table as of now
is 55 GB. Since this table is frequently queried
by reports and other application modules, performance is decaying.
The table TAGA_DTLS_SUBMISSIONS has no primary key or DATE column.
Now the challenge is to archive the data in such a way that the application
is still able to refer to the old data [either the data is moved to a different table
or to a different schema].
Thanks & Regards
Avinash -
I have created a standby database in 10g using Data Guard. I ran a query to check that the logs are being applied and I get a perplexing result. It looks as if my logs are getting sent to the DB twice. Any ideas as to what may be causing this?
The result from the query is below, run on what I believed was the right view.
Jim
SEQUENCE# FIRST_TIME NEXT_TIME APPLIED
3206 9/18/2006 9:38:33 AM 9/18/2006 9:53:35 AM NO
3206 9/18/2006 9:38:33 AM 9/18/2006 9:53:35 AM YES
3205 9/18/2006 9:23:28 AM 9/18/2006 9:38:33 AM NO
3205 9/18/2006 9:23:28 AM 9/18/2006 9:38:33 AM YES
3204 9/18/2006 9:08:23 AM 9/18/2006 9:23:28 AM NO
3204 9/18/2006 9:08:23 AM 9/18/2006 9:23:28 AM YES
3203 9/18/2006 8:53:18 AM 9/18/2006 9:08:23 AM NO
3203 9/18/2006 8:53:18 AM 9/18/2006 9:08:23 AM YES
3202 9/18/2006 8:38:13 AM 9/18/2006 8:53:18 AM NO
3202 9/18/2006 8:38:13 AM 9/18/2006 8:53:18 AM YES
3201 9/18/2006 8:23:08 AM 9/18/2006 8:38:13 AM NO
3201 9/18/2006 8:23:08 AM 9/18/2006 8:38:13 AM YES
3200 9/18/2006 8:08:03 AM 9/18/2006 8:23:08 AM NO
3200 9/18/2006 8:08:03 AM 9/18/2006 8:23:08 AM YES
Never mind, please. I was running the query on the primary DB, so I was seeing the creation and transmittal of the logs from the primary to the standby. My mistake.
Jim
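For anyone else reading this output on the primary: V$ARCHIVED_LOG carries one row per archive destination, which is why every sequence appears twice. A hedged sketch that keeps only the rows for the remote destination (dest_id = 2 is an assumption; check V$ARCHIVE_DEST for the actual id of your standby destination):

```sql
-- One row per sequence, restricted to the standby destination:
SELECT sequence#, first_time, next_time, applied
FROM   v$archived_log
WHERE  dest_id = 2
ORDER  BY sequence#;
```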