Reduce database header?
Hi all, I'm making a new layout but my header has got out of hand and won't resize smaller. I get the double-arrow cursor when I mouse over the boundary between body and header, but it will only move in one direction, down the page - what's the story? Many thanks for any help, Duncan
Hello
I assume that your header contains a forgotten object.
I was able to reproduce the described behaviour with a text block containing several returns.
Nothing is visible, but the header boundary can't be moved up.
Try selecting the entire content of the header so the culprit will appear.
Yvan KOENIG (from FRANCE mardi 8 mai 2007 17:15:58)
Similar Messages
-
How to reduce envelope header?
How do you reduce the header margin for envelopes in Pages 5.5 on OS X 10.10?
Cody,
We would have to know exactly which template you are using. Last time I looked, the templates differed in design, but I don't recall any of them using headers.
Jerry -
UCCE 10.0 EDMT will reduce database size?
Hi
We are doing a Tech Refresh, upgrading a CCE 8.0.3 server to CCE 10.0.
The initial HDS DB size is 145 GB;
after the patch upgrade to 8.5, the HDS DB size is 125 GB,
but after EDMT to version 10.0 the migrated data is 79 GB.
Did something happen in the migration to the new version that discarded some data?
In the EDMT logs everything is successful.
In the 8.5 HDS Tables folder I can see a count of 366 items, and the Views folder has 443 items.
In the 10.0 HDS Tables folder I can see a count of 343 items, and the Views folder has 456 items.
Why do we see this difference? Is it normal, and how did we get this much reduction in size?
Also, there is a difference in HDS size between AW A and B too.
Thanks
HariSivaji
Hello Kanan,
I have installed a Solution Manager 7.1 SP03 system, using Oracle rather than MaxDB.
Right now the system is pretty pristine. The database allocation is 146 GB, and of that allocated space I have used about 100 GB.
My database size is 146 GB, I do not have a shortage of disk space, and the DB will autoextend should I fill it up.
So my question would be: of the 136 GB, what percentage is used? When you say it is growing, is it extending, or is it just using more of the allocated space?
I am just now looking at a colleague's Solman 7.1 SP03 system. They have 176 GB allocated to the database, of which 104 GB is used. Their system has had a fair bit of use, but it is not a production system either, so it doesn't have a real-world load on it.
Typically, reducing the size of the database would entail unloading the database to a flat file, resizing the database, and then reloading it. This is usually a DBA function.
So I am not sure at what point a MaxDB database would extend, if you mean the physical size of the database is getting bigger. Still, 136 GB wouldn't be considered an unreasonable allocation, as my system has 146 GB allocated to the DB and my colleague has 176 GB allocated. But I have only used 100 GB, and they have used many scenarios, have maybe 20 users, and have only used 4 GB more than me. So what is the size of the used space versus the allocation?
You can remove various service sessions using report RDSMOPREDUCEDATA, but on a training system I am not sure how much space that would free up. You can run DB02 to see the allocation versus what is used, if you are not sure whether 136 GB is the allocation or the space used. If 136 GB is the allocation, it's actually low, and I suspect it is, since MaxDB tends to have a smaller footprint than Oracle. But if it is the space used and not the total allocation, then it seems very high for a training system.
Hope this helps some.
Regards,
Paul -
Is there a way to reduce the horizontal length of the track header more? I know dragging to the left reduces it a little.
You can't hide them
...because you couldn't select tracks anymore if they were.
Christian -
BW System copy from PRD to DEV - Reduce database size
Hi All
We are in the process of copying our existing production BW system (3.5) to become a new development & test system. We want to reduce the database size significantly from 1.4 TB to about 150-200 GB. We have deleted all cube and ODS data, as well as much of the master data, but still have a lot of tablespace tied up in monitor entries (e.g. table RSMONMESS) and IDocs. All of the notes I have read say that we cannot delete from RSMONMESS in a "Production" environment; however, we are creating a new DEV system and really need to get rid of the monitor entries anyway. So there are two questions:
1. Any idea how we can clear up table RSMONMESS or delete unwanted monitor entries?
2. How can we delete IDocs copied from our production system without archiving them? (They are of no use in our new test system.)
Please help.
Many thanks in anticipation.
Paul Sullivan
Orica IT
Hi Paul,
See SAP Note 694895 - Performance and tables RSMON, RSDONE
Note Language: English Version: 5 Validity: Valid Since 29.11.2005
Summary
Symptom
Tables RSMON* (for example, table RSMONMESS) and tables RS*DONE (for example, table RSSELDONE) continuously increase in size in the BW system with each request that is created.
Currently there is no option to reduce these tables.
Do not delete any entries from these tables. If you did, this would have the following consequences:
When you carry out the next check for the requests for which you have deleted records from these tables, the check will not run properly.
The status of the relevant requests turns RED in the monitor, and also in all affected data targets that contain the request.
The affected request and all subsequent requests are no longer visible in the reporting - all queries on the affected data targets then only display old data.
Numerous dumps will occur in various situations.
You will not be able to repair the errors caused by the deletion.
More Terms
RSMONMESS; RSSELDONE; Performance; RSMON; RSDONE;
Cause and Prerequisites
This is caused by a program error.
Solution
In the next BW release (BI in SAP NetWeaver 2004s - that is, BW 7.0), you will be able to archive entries from the RSMON* and RS*DONE tables using a request archiving process that archives the administrative information for requests.
After archiving, these tables are then considerably reduced in size.
Until then, there is unfortunately no option for reducing the tables.
Rgds,
Colum -
I have a reporting module. In that reporting module, I have to take data from different tables. Each field of the report gets its data from its own table. E.g., if the report has two fields, 1. EmployeeId and 2. CompanyId, then I want to show the employee name and company name in my report. For this I have to do a database hit to get the employee name from the Employee table (for the given EmployeeId), and then to get the company name I have to hit the DB again (the Company table). Is there any way to reduce such database hits, as this has slowed my reports down badly? If there are 10000 rows, then I have to do a DB hit for these two columns 10000 times (and at runtime, too). That's bad, right? What should I do to make my application faster?
I have a reporting module. In that reporting module, I have to take data from different tables. Each field of the report gets its data from its table. E.g. if in the report I have two fields, 1. EmployeeId 2. CompanyId, then I want to show employee name and company name in my report. For this I have to do a database hit to get the employee name from the Employee table (for the given EmployeeId) and then to get the company name I have to hit the DB again (the Company table).
Why can't you do one query and bring the entire data set for the report in one shot?
Is there any way to reduce such database hits, as this has reduced the speed of my reports like anything? If there are 10000 rows then I have to do a DB hit for these two columns 10000 times (that too at runtime). That's bad, right?
That's very bad, as you already know.
What should I do to make my application faster?
Learn how to write SQL properly. One query should be able to bring the entire data set over in one network roundtrip.
-
Reducing Database Call Techniques...query caching the only way?
What's the most efficient way to reuse data that gets called on practically every page of an application?
For instance, I have a module that handles all my gateways, sub-pages, sub-gateways, etc. This will only change whenever a change is made to the page structure in the admin portion of the application. It's really not necessary to hit the database every time a page loads. Is this a good instance in which to use query caching? What are the pros, cons, and alternatives? I thought an alternative might be to store it in a session, but that doesn't sound too ideal.
Thanks!
Paul
What's the most efficient way to reuse data that gets called on practically every page of an application?
That sounds like a question from the certification exam. The answer is to store the data in session or application scope, depending on the circumstances. If the data depends on the user, then the answer is session. If the data persists from user to user, then it is application.
admin portion of the application.
Suggests users must log in. Otherwise you cannot distinguish admin from non-admin.
This will only change whenever a change is made to the page structure in the admin portion of the application.
Then I would go for storing the data in application scope, as the admin determines the value for everybody else. However, the session scope also has something to do with it. Since the changes are only going to occur in the admin portion, I would base everything on a variable, session.role.
You cache the query by storing it directly in application scope within onApplicationStart in Application.cfc, like this (the datasource name is just a placeholder):
<cfquery name="application.myQueryName" datasource="#application.dsn#">
    <!--- your SELECT statement goes here --->
</cfquery>
The best place for the following code is within onSessionStart in Application.cfc.
<!--- It is assumed here that login has already occurred. Your code checks whether
session.role is Admin. If so, make the changes. --->
<cfif session.role is 'admin'>
<!--- Make changes to the data in application.myQueryName, otherwise do nothing --->
</cfif>
Added edit: On second thought, the best place for setting the application variable is in onApplicationStart. -
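The application-scope pattern in the thread above can be sketched in Python terms. `application` here is just a module-level dict standing in for ColdFusion's application scope, and `load_page_structure` is a hypothetical stand-in for the real page-structure query:

```python
# Process-wide cache standing in for ColdFusion's application scope.
application = {}

def load_page_structure():
    # A real DB query would go here; a constant stands in for it.
    return ["home", "gateways", "subpages"]

def on_application_start():
    # Runs once at startup: cache the query result for every request.
    application["page_structure"] = load_page_structure()

def on_admin_change():
    # Only the admin portion refreshes the cache, since only the
    # admin portion can change the page structure.
    application["page_structure"] = load_page_structure()

on_application_start()
print(application["page_structure"])  # ['home', 'gateways', 'subpages']
```

Every page read then hits the dict rather than the database, and the cache only goes stale for the instant between an admin change and the refresh.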
Pls Reduce Header Size
Hi,
Would you please provide more information about your question so we can resolve it more efficiently?
What does "header" refer to?
If you mean the email message header, I think there is nothing we can do to reduce the header size.
Please feel free to post back with more information.
Best Regards,
Steve Fan
TechNet Community Support
-
OIM 10g database reducing space (~120+GB)
Looking for pointers on reducing database size (OTN / Oracle documentation / Metalink etc.)
Environment
OIM: 9.1.0.2
Database: Oracle 10g Standard Edition
Database machine: Unix server
This will help:
Could Not execute auto Check for DISPLAY
Check if the DISPLAY variable is set - Failed -
Hi experts,
I need the standard LDBs related to the tables of SD, MM and FI... can anyone help with this?
OK, so here you are. If you want to edit one, go to SE36.
LDB name
Logical database short text
50V
Delivery in process
AAV
Logical Database RV: Sales Documents
ABCLAIMLDB
Agency Business: Complaints Processing
ABS
ABAP Book: Customer and bookings
ACAC_ACE_LDBDS
Accrual Object Distribution Server Reporting Table
ACAC_ACE_LDBPS
Accrual Engine Posting Server Reporting Database
ACE_FILA_LDBDS
Accrual Object Distribution Server Reporting Table
ACE_FILA_LDBPS
Accrual Engine Posting Server Reporting Database
ACE_SOP_LDBDS
Stock Option Accounting Distribution Server LDB
ACE_SOP_LDBPS
Provisions for Awards: Posting Server LDB
ACEDS_003
Accrual Object Distribution Server Reporting Table
ACEPS_003
Accrual Engine Posting Server Reporting Database
ADA
Assets Database
AFI
Logical database for orders
AGENCYLDB
Agency Business: Logical Database
AKV
Logical Database RV: Sales Documents
ALV
Archiving Deliveries
ARV
Logical Database RV: Sales Documents
ASV
Request Screen for Summary Information
AUK
Settlement documents
AUW
Allocation Table
B1L
Transfer requirements by number
BAF
BAV-Data collector
BAM
Purchase Requisitions (General)
BANK
Logical Database for Table BNKA
BBM
Archiving of Purchase Requisitions
BC405_DIFF_NODES
Example of Different Node Types
BJF
Loans flow records with date restriction(YR
BKK
Base Planning Object
BKM
Purchase Requisitions per Account Assignment
BMM
Documents for Number
BPF
Treasury Business Partner
BRF
Document Database
BRM
Accounting Documents
BTF
Loan portfolios and flows
BTM
Process Order; Print
BUCHUNGSJOURNAL
LDB for Posting Journal
BUD
LDB For Loans Master Data, Conditions, Documents
C1F
Cash Budget Management
CCLDB_AENR
ECH: Change number with status information
CDC
Document structure
CEC
Equipment BOM
CEK
Cost Centers - Line Items
CFK
Data pool for SAP EIS
CIK
Cost Centers - Actual Data
CKA
Costing
CKC
Sales order BOM
CKM
Material master
CKQ
Material Selection for New Costing Solution
CKS
MiniApp. for the Calculator: Sales Order Data
CKS_WAO
MiniApp: Sales Order Items to be Processed
CKW
Costing run: Material Selection
CMC
Material BOM
CPK
Cost Centers - Plan Data
CRC
Work Centers
CRK
Cost Centers - Total
CRZ
Logical database for courses BC220/BC230
CSC
Standard BOM
CSR
Logical database for archiving BOMs
CTC
Functional location BOM
DBM
MRP Documents
DDF
CUSTOMER DATABASE
DPM
Planned Orders
DSF
Loan Debit Position
DVS
Logical database for archiving DMS data
DWF
Loan resubmission
EBM
Purchasing Activities per Requirement Tracking No.
ECM
Purchasing Documents per Material Class
EHS_OH001
Logical Database for Occupational Health
EKM
Purchasing Documents per Account Assignment
ELM
Purchasing Documents per Vendor
EMM
Purchasing Documents for Material
ENM
Purchasing Documents per Document Number
EQI
Logical Database (Equipment)
ERM
Archiving of Purchasing Documents
ESM
Purchasing Documents per Collective Number
EWM
Purchasing Documents per Supplying Plant
F1S
BC: Planned flights, flights and bookings
FDF
Cash management and forecast
FDK
IS-U/FERC: Drill down to line items and paths
FEF
Cash Management - Memo Records
FILA
Lease Accounting
FMF
Funds Management
FPMF
LDB, reads FPAYH and FPAYP
FRF
Drill-down Selection Screen
FSF
Cash Management Totals Records
FTI_BW_CFM_VALUES
Market Values and Simulated Values in Pos. Mgmt
FTI_LO_PERIODS
Loan/CML Period Evaluations
FTI_LO_POSITIONS
Loan /CML Positions
FTI_SWAP_POSITION
Swap Positions
FTI_TR_CASH_FLOWS
Treasury Payment Information
FTI_TR_PERIODS
Treasury: Period-Based Evaluations
FTI_TR_PL_CF
Treasury: Revenue and Cash Flow Reporting
FTI_TR_POSITIONS
Treasury Positions
FTLM_DB01
Limit Management
FUK
IS-U/FERC: Drill back from document line items
G1S
text
GLG
FI-SL Totals and Line Items
GLU3
Flexible G/L
I1L
Inventory data for storage bin
I2L
Warehouse quants for storage bin
I3L
Inventory documents
IBF
Real Estate Logical Database (Lease-Out)
IDF
Real Estate Logical Database
IDFPLUS
Real Estate Plus Logical Database
IFM
Purchasing Info Records: General
ILM
Archiving Purchasing Info Records
IMA
Logical database for investment programs
IMC
IM Summarization (not usable operationally)
IMM
Inventory documents for material
IMR
Approp. requests (not operationally functional)
IMT
Approp. requests (not operationally functional)
INM
Inventory documents
IOC
Shop floor control - order info system
IPM_ACE_LDBDS
Accrual Object Distribution Server Reporting Table
IPM_ACE_LDBPS
Accrual Engine Posting Server Reporting Database
IRM
Reorganization of inventory documents
J5F
Logical Database for new Nota Fiscal Database
K1V
Generating Conditions
KDF
Vendor Database
KIV
Customer Material Information
KKF
Balance Audit Trail of Open Items
KLF
Historical Balance Audit Trail
KMV
SD Documents for Credit Limit
KOV
Selection of Condition Records
L1L
Evaluation Whse Documents
L1M
Stock movements for material
LMM
Stock Movements for Material
LNM
Stock movements
LO_CHANGE_MNMT
Logical database for engineering change management
MAF
Dataset for Dunning Notices
MDF
Logical Database for Master Data Selection
MEPOLDB
Logical Database/Selection of Purch. Order Tables
MIV
BC: Planned flights, flights and bookings
MMIMRKPFRESB
Selection from Reservations
MRM
Reorganization of material documents
MSM
Material master
NOTIF
LDB for Basic Notifications
NOTIFICATIONS
NTI
Logical database object networking
ODC
Shop floor control - orders per MRP controller
ODK
Orders
OFC
Shop floor control - orders per prod.scheduler
OHC
Shop floor control - orders by numbers
OPC
Shop floor control - orders by material
PAK
CO-PA Segment Level and Line Items
PAP
Applicant master data
PCH
Personnel Planning
PGQ
QM: Specs and Results of the Quality Inspection
PMI
Structure database (plant maintenance)
PNI
PM Planning Database
PNM
Planning database
PNM_OLD
Planning Database
PNP
HR Master Data
PNPCE
HR Master Data (Incl. Concurrent Employment)
POH
Production orders database - header
PSJ
Project system
PTRVP
Travel Management
PYF
Database for Payment Medium Print Programs
QAM
Inspection Catalogs: Selected Sets
QAQ
Inspection Catalogs: Selected Sets
QCM
Inspection Catalogs: Codes
QCQ
Inspection Catalogs: Codes
QMI
Logical database (PM notifications)
QMQ
Inspection Characteristics
QNQ
Quality Notifications
QTQ
Logical database for inspection methods
QUERYTESTLDB
Test LDB for InfoSet Query
R0L
Archive selection: Transfer orders (MM-WM)
R1L
Archive selection: Transfer requirements (MM-WM)
R2L
Archive Selection: Posting Change Notices (MM-WM)
R3L
Archive selection: Inventory documents (MM-WM)
R4L
Archive selection: Inventory histories (MM-WM)
RBL
Archiving of transfer requests
REAO
Real Estate: Logical Database for Architecture
REBD
Logical Database for Real Estate Objects
REBP
Logical Database via Partner (Real Estate)
RECN
Real Estate: Selection by Contracts
RECONTRACT
RE Logical Database: (General) Contract
RHL
Archiving of inventory history
RIL
Archiving of inventory documents
RKM
Reservations for Account Assignment
RLI
Logical Database Reference Location
RMM
Reservations for material
RNM
Reservations
RTL
Archiving of transfer orders
RUL
Archiving of Posting Change Notices
S1L
Stock by storage bins
S1L_OLD
Stock by Storage Bins
S2L
Warehouse quant for material
S3L
Stocks
SAK
Completely Reversed Allocation Documents
SD_KUSTA
Logical Database for Sales Summary
SD_ORDER
Logical database for inquiries, contracts
SD_SALES_DOCUMENT
Logical database for inquiries, contracts
SDF
G/L Account Database
SMI
Serial Number Management
T1L
Transfer orders by number
T1L_OLD
Transfer Orders by Number
T2L
Transfer orders for material
T3L
Transfer orders for storage type
T4L
Transfer order for TO printing
T5L
Transfer orders for reference number
TAF
Treasury
TIF
Treasury Information System
TPI
Functional Location Logical Database
U1S
User master reorganization: Password changes
U2S
User master reorganization: Password changes
U3S
User master reorganization: Password changes
U4S
User master reorganization: Password changes
UKM_BUPA
SAP Credit Management: Business Partner
V12L
Pricing Report
VAV
Logical Database RV: Sales Documents
VC1
List of Sales Activities
VC2
Generate Address List
VDF
Customer Database with View of Document Index
VFV
Logical Database RV: Billing Documents
VLV
Logical Database For Deliveries
VVAV
Logical Database RV: Sales Documents
VXV
SD: Billing Document - Export
WAF
Securities position plus additional master data
WOI
Maintenance Item
WPI
Maintenance plans
WTF
Securities positions and flows
WTY
WTY LD
WUF
Sec.-Determ.master data for positions
I hope this will help. -
Power outage Mailbox Database won't mount
Hello,
We experienced a power outage, and when we brought the server back up, the mailbox database wouldn't mount.
I ran Eseutil /mh and the database was in a Dirty Shutdown state, then I did /r and finally /p.
The database is now in a Clean Shutdown state but still doesn't mount; the error in the event viewer is:
failed. Error: The database action failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=-344) ---> Microsoft.Exchange.Data.Storage.AmOperationFailedException: An Active Manager operation failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=-344) ---> Microsoft.Mapi.MapiExceptionCallFailed: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=-344)
This is my first time dealing with this problem; have I missed something?
I'm still unable to mount the database; I've moved the logs out of the path and tried to mount, with the same error.
The output of eseutil /mh:
PS E:\Mailbox Database moved> eseutil /mh '.\Mailbox Database 1773415643.edb'
Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 14.00
Copyright (C) Microsoft Corporation. All Rights Reserved.
Initiating FILE DUMP mode...
Database: .\Mailbox Database 1773415643.edb
DATABASE HEADER:
Checksum Information:
Expected Checksum: 0x0122201a
Actual Checksum: 0x0122201a
Fields:
File Type: Database
Checksum: 0x122201a
Format ulMagic: 0x89abcdef
Engine ulMagic: 0x89abcdef
Format ulVersion: 0x620,17
Engine ulVersion: 0x620,17
Created ulVersion: 0x620,17
DB Signature: Create time:11/10/2012 19:14:18 Rand:1372720 Computer:
cbDbPage: 32768
dbtime: 279606 (0x44436)
State: Clean Shutdown
Log Required: 0-0 (0x0-0x0)
Log Committed: 0-0 (0x0-0x0)
Log Recovering: 0 (0x0)
GenMax Creation: 00/00/1900 00:00:00
Shadowed: Yes
Last Objid: 1
Scrub Dbtime: 0 (0x0)
Scrub Date: 00/00/1900 00:00:00
Repair Count: 2
Repair Date: 11/10/2012 19:14:18
Old Repair Count: 0
Last Consistent: (0x2,8,4D) 11/10/2012 20:51:35
Last Attach: (0x1,9,6C) 11/10/2012 20:51:35
Last Detach: (0x2,8,4D) 11/10/2012 20:51:35
Dbid: 1
Log Signature: Create time:11/10/2012 20:51:35 Rand:7225210 Computer:
OS Version: (6.0.6002 SP 2 NLS 500100.50100)
Previous Full Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Previous Incremental Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Previous Copy Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Previous Differential Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Current Full Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Current Shadow copy backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
cpgUpgrade55Format: 0
cpgUpgradeFreePages: 0
cpgUpgradeSpaceMapPages: 0
ECC Fix Success Count: found (15)
Last ECC Fix Success Date: 11/10/2012 19:30:10
Old ECC Fix Success Count: found (11)
ECC Fix Error Count: none
Old ECC Fix Error Count: none
Bad Checksum Error Count: found (620)
Last Bad Checksum Error Date: 11/10/2012 19:32:24
Old bad Checksum Error Count: found (518)
Last checksum finish Date: 00/00/1900 00:00:00
Current checksum start Date: 00/00/1900 00:00:00
Current checksum page: 0
Operation completed successfully in 2.277 seconds. -
LMS 4.2.3 Database Backup Issue
Hi All,
I was unable to take a database backup of the LMS. It's failing with the below error:
Error: Backup Failed: Error(404) Insufficient space on destination
The destination drive has 150 GB of free space, but LMS still throws the error.
The backup is failing. Any suggestions? How big is the database, and how can we check?
Regards,
Channa
Hi Vinod/Martin,
The backup is failing
Backup to 'E:/LMSDBBKP_Backups/Today's_new_backup' started at: [Fri Oct 17 18:00:00 2014]
[Fri Oct 17 18:02:10 2014] ERROR(405): Insufficient disk space on backup destination volume.
Available Space is 188090140Kb and required space for backup is 226175301Kb.
[Fri Oct 17 18:02:10 2014] Backup failed: 2014/10/17 18:02:20
The DB file is very huge. I wanted to reduce the database file size by purging performance data, but now I am unable to purge performance data from the purge settings; it gives the error "Cannot connect to JRM, Check whether JRM is up and running".
I can see that JRM is up and running normally.
Is there any way I can purge the performance data from the backend?
Regards,
Channa -
I have one scenario on our live production DB. This problem seems to be performance-related. I want your advice and your optimal solution to overcome this issue.
1. On our live production system we did some year-end activity (purging of old data), and this activity was done from the GUI. We did not delete all the records from the database, only some records that were not in use at all. We were willing to go with the TRUNCATE command, but unfortunately the tools do not provide TRUNCATE command syntax.
2. After this activity we found that the database tables, as well as the indexes, have become more fragmented.
3. It has also degraded database response times; nowadays some queries take more than 5 minutes to execute.
4. Moreover, I have checked that almost all indexes as well as tables are fragmented. We did a workaround for this, i.e. rebuilding the indexes, but we are still facing the same problem because the deletion activity happens at the end of every month or two.
5. For this live production database we don't have any downtime.
Any valuable suggestion will be much appreciated.
Hi!
I'm sorry to tell you this but you're not in the right forum!
You must go to the DB forum!
Warm regards!
Max.
P.S.
Have you already run the statistics? -
Hi,
I am creating a new database, aflin, using the following steps given in the Oracle documentation, but it's giving an error.
1) set oracle_sid=aflin
2) SQLPLUS /nolog
CONNECT sys/password AS sysdba
When I give this connect command, it says "insufficient privileges or invalid username/password".
I changed the init<sid>.ora and init.ora to reflect this new database, aflin.
Most probably it is not picking up the new database name, aflin.
How do I set this ORACLE_SID so that I can create this new database, aflin?
Also, when I do not give the command in step 1, Oracle connects to some other database when I check through (select instance_name from v$instance).
Can you tell me how to go about creating this new database?
Thanks in advance,
MAK
Yes, a full export apparently also creates schemas/tablespaces for you.
Use "ignore=y" if it should be able to overcome existing tablespaces/users.
Well... I think I did a full=y exp/imp once, but I simply don't recall much about it (Oracle 7 -> 8).
I just did a quick check to find any wise words from support, and there weren't any - except that they seem to think "full=y" is good for defragmenting an entire database, according to this plan:
Reducing Database Fragmentation
You can reduce fragmentation by performing a full database export and import as follows:
- Do a full database export (FULL=y) to back up the entire database.
- Shut down the Oracle database server after all users are logged off.
- Delete the database.
- Re-create the database using the CREATE DATABASE statement.
- Do a full database import (FULL=y) to restore the entire database.
Not much info, I know :)
Do you have the space/time to just try, and redo the task if it fails miserably?
(And post what you experience :) )
- but I would hope it works "out of the box" - I mean, they invented "full=y", so they had better make it work too, and be able to handle duplicates and such :D
Or, you might consider doing the full exp and then picking from it what you need?
(But then you lose things found only in a full export (triggers / "public synonyms" and ?)
/Ryberg -
How to mount database copy without specific datafiles
Hello all,
I need to make a database copy without specific datafiles, because in the copy I only need some of the datafiles, not all of them.
I tried the following commands:
startup mount
alter tablespace mydata offline
But it appears that the database must be open.
Has anybody done something similar?
Is there a document where I can read about this?
What else do I need to know in order to start up this reduced database copy?
Thank you in advance.
Hello All,
Thanks for your answers.
Yes, I have already copied some filesystems to another server. All the data I need is on those filesystems. I haven't tried to open the database. So, in order to bring it online, and taking into consideration all your suggestions, I will perform this sequence:
startup mount
alter database datafile mydatafile1 offline
alter database datafile mydatafile2 offline
alter database datafile myindexdatafile1 offline
alter database datafile myindexdatafile2 offline
alter database open
With these steps, the database should come online. Correct?
Is there another step that I'm missing?
What about the listener?
Will it be enough to change the port and server name?
Thanks again.