What are the best solutions for data warehouse configuration in 10gR2

I need help putting together a solution proposal for my client's data warehouse upgrade.
Current configuration: Oracle Database 9.2.0.8. The database holds the data warehouse plus one additional data mart on the same host; their sizes are 6 TB (retention policy of 3 years plus the current year) and 1 TB respectively. The ETL tool and the BO reporting tools are hosted on the same machine as well, and this configuration is performing very poorly.
The client cannot make major architectural or configuration changes to the existing environment right now because of certain constraints.
However, they have agreed to move the databases onto hosts separate from the ETL tools and BO objects. We are also planning to upgrade the database to 10gR2 to gain stability, improve performance, and get past the current problems.
We cannot upgrade the database to 11g because BO is at version 6.5, which is not compatible with Oracle 11g, and the client cannot afford to upgrade anything other than the database.
So my role is to propose a solution that delivers better performance and to carry out a successful migration of the Oracle database from one host to another (same platform and OS) along with the upgrade.
So far I have thought of the following:
Move the data warehouse database and the data mart to a separate host.
The new host will be the same platform, i.e. an HP Superdome running 32-bit HP-UX (we cannot move to 64-bit because the ETL tool does not support it).
Install a fresh Oracle Database 10gR2 on the new host and move the data into it.
Explore the new 10gR2 features that help a data warehouse, i.e. the SQL MODEL clause, parallel processing, partitioning, Data Pump, and SPA to compare pre- and post-migration performance (see the partitioning sketch after this list).
I am also considering RAC, since our main goal is to demonstrate a substantial performance improvement.
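To make the partitioning point concrete, here is a minimal sketch of the kind of change involved, using a hypothetical SALES_FACT table (table and column names are illustrative only, not our real model):

-- Hypothetical fact table: range partitioning by date lets reports prune to the
-- partitions they actually touch and keeps the retention policy cheap to enforce.
CREATE TABLE sales_fact (
  sale_date  DATE          NOT NULL,
  product_id NUMBER        NOT NULL,
  store_id   NUMBER        NOT NULL,
  amount     NUMBER(12,2)
)
PARTITION BY RANGE (sale_date) (
  PARTITION p_2010 VALUES LESS THAN (TO_DATE('2011-01-01', 'YYYY-MM-DD')),
  PARTITION p_2011 VALUES LESS THAN (TO_DATE('2012-01-01', 'YYYY-MM-DD')),
  PARTITION p_2012 VALUES LESS THAN (TO_DATE('2013-01-01', 'YYYY-MM-DD')),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE)
);

-- Allow big scans and joins against the table to run in parallel.
ALTER TABLE sales_fact PARALLEL 8;

-- Ageing out a year that has dropped outside the 3-years-plus-current window.
ALTER TABLE sales_fact DROP PARTITION p_2010;

Purging by dropping (or exchanging) whole partitions is far cheaper than deleting rows, which should help both the batch window and the redo volume.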
I need your help to prepare a good roadmap for this assignment. Please suggest.
Thanks,
Tapan

SGA = 27.5 GB and PGA = 50 MB
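A 50 MB PGA target looks very small for a warehouse workload with heavy sorting and hashing, so before the migration it is probably worth seeing what the instance itself estimates for larger settings. A quick check using the standard v$ views (available in both 9.2 and 10gR2):

-- Current PGA target and workarea policy.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('pga_aggregate_target', 'workarea_size_policy');

-- What the instance estimates the PGA cache hit percentage would be at other target sizes.
SELECT ROUND(pga_target_for_estimate / 1024 / 1024) AS target_mb,
       estd_pga_cache_hit_percentage                AS estd_hit_pct,
       estd_overalloc_count
FROM   v$pga_target_advice
ORDER  BY pga_target_for_estimate;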
I am also pasting part of the STATSPACK report below, excluding the snapshots around the database bounce. Please suggest where there is scope for improvement here.
STATSPACK report for
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 582946 11-Mar-13 20:02:16 46 12.8
End Snap: 583036 12-Mar-13 18:24:24 60 118.9
Elapsed: 1,342.13 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 21,296M Std Block Size: 16K
Shared Pool Size: 6,144M Log Buffer: 16,384K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 1,343,739.01 139,883.39
Logical reads: 100,102.54 10,420.69
Block changes: 3,757.42 391.15
Physical reads: 6,670.84 694.44
Physical writes: 874.34 91.02
User calls: 1,986.04 206.75
Parses: 247.87 25.80
Hard parses: 5.82 0.61
Sorts: 1,566.76 163.10
Logons: 10.99 1.14
Executes: 1,309.79 136.35
Transactions: 9.61
% Blocks changed per Read: 3.75 Recursive Call %: 43.34
Rollback per transaction %: 3.49 Rows per Sort: 190.61
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.90 Redo NoWait %: 100.00
Buffer Hit %: 96.97 In-memory Sort %: 100.00
Library Hit %: 99.27 Soft Parse %: 97.65
Execute to Parse %: 81.08 Latch Hit %: 99.58
Parse CPU to Parse Elapsd %: 3.85 % Non-Parse CPU: 99.34
Shared Pool Statistics Begin End
Memory Usage %: 7.11 50.37
% SQL with executions>1: 62.31 46.46
% Memory for SQL w/exec>1: 26.75 13.47
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
CPU time 492,062 43.66
db file sequential read 157,418,414 343,549 30.49
library cache pin 92,339 66,759 5.92
PX qref latch 63,635 43,845 3.89
db file scattered read 2,506,806 41,677 3.70
Background Wait Events for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
log file sequential read 176,386 0 3,793 22 0.2
log file parallel write 2,685,833 0 1,813 1 3.5
db file parallel write 239,166 0 1,350 6 0.3
control file parallel write 33,432 0 79 2 0.0
LGWR wait for redo copy 478,120 536 75 0 0.6
rdbms ipc reply 10,027 0 47 5 0.0
control file sequential read 32,414 0 40 1 0.0
db file scattered read 4,101 0 30 7 0.0
db file sequential read 13,946 0 29 2 0.0
direct path read 203,694 0 14 0 0.3
log buffer space 363 0 13 37 0.0
latch free 3,766 0 9 2 0.0
direct path write 80,491 0 6 0 0.1
async disk IO 351,955 0 4 0 0.5
enqueue 28 0 1 21 0.0
buffer busy waits 1,281 0 1 0 0.0
log file single write 172 0 0 1 0.0
rdbms ipc message 10,563,204 251,286 992,837 94 13.7
pmon timer 34,751 34,736 78,600 2262 0.0
smon timer 7,462 113 76,463 10247 0.0
Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
Statistic Total per Second per Trans
CPU used by this session 49,206,154 611.0 63.6
CPU used when call started 49,435,735 613.9 63.9
CR blocks created 6,740,777 83.7 8.7
Cached Commit SCN referenced 423,253,503 5,256.0 547.2
Commit SCN cached 19,165 0.2 0.0
DBWR buffers scanned 48,276,489 599.5 62.4
DBWR checkpoint buffers written 6,959,752 86.4 9.0
DBWR checkpoints 454 0.0 0.0
DBWR free buffers found 44,817,183 556.5 57.9
DBWR lru scans 137,149 1.7 0.2
DBWR make free requests 162,528 2.0 0.2
DBWR revisited being-written buff 4,220 0.1 0.0
DBWR summed scan depth 48,276,489 599.5 62.4
DBWR transaction table writes 5,036 0.1 0.0
DBWR undo block writes 2,989,436 37.1 3.9
DDL statements parallelized 3,723 0.1 0.0
DFO trees parallelized 4,157 0.1 0.0
DML statements parallelized 3 0.0 0.0
OS Block input operations 29,850 0.4 0.0
OS Block output operations 1,591 0.0 0.0
OS Characters read/written 182,109,814,791 2,261,447.1 235,416.9
OS Integral unshared data size ################## 242,463,432.4 ############
OS Involuntary context switches 188,257,786 2,337.8 243.4
OS Maximum resident set size 43,518,730,619 540,417.4 56,257.5
OS Page reclaims 159,430,953 1,979.8 206.1
OS Signals received 5,260,938 65.3 6.8
OS Socket messages received 79,438,383 986.5 102.7
OS Socket messages sent 93,064,176 1,155.7 120.3
OS System time used 10,936,430 135.8 14.1
OS User time used 132,043,884 1,639.7 170.7
OS Voluntary context switches 746,207,739 9,266.4 964.6
PX local messages recv'd 55,120,663 684.5 71.3
PX local messages sent 55,120,817 684.5 71.3
Parallel operations downgraded 1 3 0.0 0.0
Parallel operations not downgrade 4,154 0.1 0.0
SQL*Net roundtrips to/from client 155,422,335 1,930.0 200.9
SQL*Net roundtrips to/from dblink 18 0.0 0.0
active txn count during cleanout 16,529,551 205.3 21.4
background checkpoints completed 43 0.0 0.0
background checkpoints started 43 0.0 0.0
background timeouts 280,202 3.5 0.4
branch node splits 4,428 0.1 0.0
buffer is not pinned count 6,382,440,322 79,257.4 8,250.7
buffer is pinned count 9,675,661,370 120,152.8 12,507.9
bytes received via SQL*Net from c 67,384,496,376 836,783.4 87,109.3
bytes received via SQL*Net from d 6,142 0.1 0.0
bytes sent via SQL*Net to client 50,240,643,657 623,890.4 64,947.1
bytes sent via SQL*Net to dblink 3,701 0.1 0.0
calls to get snapshot scn: kcmgss 145,385,064 1,805.4 187.9
calls to kcmgas 36,816,132 457.2 47.6
calls to kcmgcs 3,514,770 43.7 4.5
change write time 369,373 4.6 0.5
cleanout - number of ktugct calls 20,954,488 260.2 27.1
cleanouts and rollbacks - consist 6,357,174 78.9 8.2
cleanouts only - consistent read 10,078,802 125.2 13.0
cluster key scan block gets 69,403,565 861.9 89.7
Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
Statistic Total per Second per Trans
cluster key scans 41,311,211 513.0 53.4
commit cleanout failures: block l 413,776 5.1 0.5
commit cleanout failures: buffer 414 0.0 0.0
commit cleanout failures: callbac 41,194 0.5 0.1
commit cleanout failures: cannot 174,382 2.2 0.2
commit cleanouts 11,469,056 142.4 14.8
commit cleanouts successfully com 10,839,290 134.6 14.0
commit txn count during cleanout 17,155,424 213.0 22.2
consistent changes 145,418,277 1,805.8 188.0
consistent gets 8,043,252,188 99,881.4 10,397.7
consistent gets - examination 3,180,028,047 39,489.7 4,110.9
current blocks converted for CR 9 0.0 0.0
cursor authentications 14,926 0.2 0.0
data blocks consistent reads - un 143,706,500 1,784.6 185.8
db block changes 302,577,666 3,757.4 391.2
db block gets 336,562,217 4,179.4 435.1
deferred (CURRENT) block cleanout 2,912,793 36.2 3.8
dirty buffers inspected 627,174 7.8 0.8
enqueue conversions 1,296,337 16.1 1.7
enqueue releases 13,053,200 162.1 16.9
enqueue requests 13,239,092 164.4 17.1
enqueue timeouts 185,878 2.3 0.2
enqueue waits 114,120 1.4 0.2
exchange deadlocks 7,390 0.1 0.0
execute count 105,475,101 1,309.8 136.4
free buffer inspected 1,604,407 19.9 2.1
free buffer requested 258,126,047 3,205.4 333.7
hot buffers moved to head of LRU 22,793,576 283.1 29.5
immediate (CR) block cleanout app 16,436,010 204.1 21.3
immediate (CURRENT) block cleanou 2,860,013 35.5 3.7
index fast full scans (direct rea 12,375 0.2 0.0
index fast full scans (full) 3,733 0.1 0.0
index fast full scans (rowid rang 192,148 2.4 0.3
index fetch by key 1,321,024,486 16,404.5 1,707.7
index scans kdiixs1 406,165,684 5,043.8 525.1
leaf node 90-10 splits 50,373 0.6 0.1
leaf node splits 697,235 8.7 0.9
logons cumulative 884,756 11.0 1.1
messages received 3,276,719 40.7 4.2
messages sent 3,257,171 40.5 4.2
no buffer to keep pinned count 569 0.0 0.0
no work - consistent read gets 4,406,092,172 54,715.0 5,695.8
opened cursors cumulative 20,527,704 254.9 26.5
parse count (failures) 267,088 3.3 0.4
parse count (hard) 468,996 5.8 0.6
parse count (total) 19,960,548 247.9 25.8
parse time cpu 323,024 4.0 0.4
parse time elapsed 8,393,422 104.2 10.9
physical reads 537,189,332 6,670.8 694.4
physical reads direct 292,545,140 3,632.8 378.2
physical writes 70,409,002 874.3 91.0
physical writes direct 59,248,394 735.8 76.6
physical writes non checkpoint 69,103,391 858.1 89.3
pinned buffers inspected 11,893 0.2 0.0
prefetched blocks 95,892,161 1,190.8 124.0
prefetched blocks aged out before 1,495,883 18.6 1.9
Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
Statistic Total per Second per Trans
process last non-idle time ################## ############## ############
queries parallelized 417 0.0 0.0
recursive calls 122,323,299 1,519.0 158.1
recursive cpu usage 3,144,533 39.1 4.1
redo blocks written 180,881,558 2,246.2 233.8
redo buffer allocation retries 5,400 0.1 0.0
redo entries 164,728,513 2,045.6 213.0
redo log space requests 1,006 0.0 0.0
redo log space wait time 2,230 0.0 0.0
redo ordering marks 2,563 0.0 0.0
redo size 108,208,614,904 1,343,739.0 139,883.4
redo synch time 558,520 6.9 0.7
redo synch writes 2,343,824 29.1 3.0
redo wastage 1,126,585,600 13,990.0 1,456.4
redo write time 718,655 8.9 0.9
redo writer latching time 7,763 0.1 0.0
redo writes 2,685,833 33.4 3.5
rollback changes - undo records a 522,742 6.5 0.7
rollbacks only - consistent read 335,177 4.2 0.4
rows fetched via callback 1,100,990,382 13,672.1 1,423.3
session connect time ################## ############## ############
session cursor cache count 1,061 0.0 0.0
session cursor cache hits 1,687,796 21.0 2.2
session logical reads 8,061,057,193 100,102.5 10,420.7
session pga memory 1,573,228,913,832 19,536,421.0 2,033,743.8
session pga memory max 1,841,357,626,496 22,866,054.4 2,380,359.0
session uga memory 1,074,114,630,336 13,338,399.4 1,388,529.0
session uga memory max 386,645,043,296 4,801,374.0 499,823.6
shared hash latch upgrades - no w 410,360,146 5,095.9 530.5
sorts (disk) 2,657 0.0 0.0
sorts (memory) 126,165,625 1,566.7 163.1
sorts (rows) 24,048,783,304 298,638.8 31,088.3
summed dirty queue length 5,438,201 67.5 7.0
switch current to new buffer 1,302,798 16.2 1.7
table fetch by rowid 6,201,503,534 77,010.5 8,016.8
table fetch continued row 26,649,697 330.9 34.5
table scan blocks gotten 1,864,435,032 23,152.6 2,410.2
table scan rows gotten 43,639,997,280 541,923.3 56,414.3
table scans (cache partitions) 26,112 0.3 0.0
table scans (direct read) 246,243 3.1 0.3
table scans (long tables) 340,200 4.2 0.4
table scans (rowid ranges) 359,617 4.5 0.5
table scans (short tables) 9,111,559 113.2 11.8
transaction rollbacks 4,819 0.1 0.0
transaction tables consistent rea 824 0.0 0.0
transaction tables consistent rea 1,386,848 17.2 1.8
user calls 159,931,913 1,986.0 206.8
user commits 746,543 9.3 1.0
user rollbacks 27,020 0.3 0.0
write clones created in backgroun 7 0.0 0.0
write clones created in foregroun 4,350 0.1 0.0
Buffer Pool Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
-> Standard block size Pools D: default, K: keep, R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
Free Write Buffer
Number of Cache Buffer Physical Physical Buffer Complete Busy
P Buffers Hit % Gets Reads Writes Waits Waits Waits
D 774,144 95.6############ 233,869,082 10,089,734 0 0########
K 504,000 99.9############ 3,260,227 1,070,338 0 0 65,898
R 63,504 96.2 196,079,539 7,511,863 535 0 0 0
Buffer wait Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
-> ordered by wait time desc, waits desc
Tot Wait Avg
Class Waits Time (s) Time (ms)
data block 7,791,121 14,676 2
file header block 587 101 172
undo header 151,617 71 0
segment header 299,312 58 0
1st level bmb 45,235 7 0
bitmap index block 392 1 3
undo block 4,250 1 0
2nd level bmb 14 0 0
system undo header 2 0 0
3rd level bmb 1 0 0
Latch Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
Consistent RBA 2,686,230 0.0 0.2 0 0
FAL request queue 86 0.0 0 0
FAL subheap alocation 0 0 2 0.0
FIB s.o chain latch 1,089 0.0 0 0
FOB s.o list latch 4,589,986 0.5 0.0 2 0
NLS data objects 1 0.0 0 0
SQL memory manager worka 5,963 0.0 0 0
Token Manager 0 0 2 0.0
active checkpoint queue 719,439 0.3 0.1 0 1 0.0
alert log latch 184 0.0 0 2 0.0
archive control 4,365 0.0 0 0
archive process latch 1,808 0.6 0.6 0 0
begin backup scn array 3,387,572 0.0 0.0 0 0
cache buffer handles 1,577,222 0.2 0.0 0 0
cache buffers chains ############## 0.5 0.0 430 354,357,972 0.3
cache buffers lru chain 17,153,023 0.1 0.0 1 385,505,654 0.5
cas latch 538,804,153 0.3 0.0 7 0
channel handle pool latc 1,776,950 0.5 0.0 0 0
channel operations paren 2,901,371 0.3 0.0 0 0
checkpoint queue latch 99,329,722 0.0 0.0 0 11,153,369 0.1
child cursor hash table 3,927,427 0.0 0.0 0 0
commit callback allocati 8,739 0.0 0 0
dictionary lookup 7,980 0.0 0 0
dml lock allocation 6,767,990 0.1 0.0 0 0
dummy allocation 1,898,183 0.2 0.1 0 0
enqueue hash chains 27,741,348 0.1 0.1 4 0
enqueues 17,450,161 0.3 0.1 6 0
error message lists 132,828 2.6 0.2 1 0
event group latch 884,066 0.0 0.7 0 0
event range base latch 1 0.0 0 0
file number translation 34 38.2 0.9 0 0
global tx hash mapping 577,859 0.0 0 0
hash table column usage 4,062 0.0 0 8,757,234 0.0
hash table modification 16 0.0 0 2 0.0
i/o slave adaptor 0 0 2 0.0
job workq parent latch 4 100.0 0.3 0 494 8.7
job_queue_processes para 1,950 0.0 0 2 0.0
ksfv messages 0 0 4 0.0
ktm global data 8,219 0.0 0 0
lgwr LWN SCN 2,687,862 0.0 0.0 0 0
library cache 310,882,781 0.9 0.0 34 104,759 4.0
library cache load lock 30,369 0.0 0.3 0 0
library cache pin 153,821,358 0.1 0.0 2 0
library cache pin alloca 126,316,296 0.1 0.0 4 0
list of block allocation 2,730,808 0.3 0.0 0 0
loader state object free 566,036 0.1 0.0 0 0
longop free list parent 197,368 0.0 0 8,390 0.0
message pool operations 14,424 0.0 0.0 0 0
messages 25,931,764 0.1 0.0 1 0
mostly latch-free SCN 40,124,948 0.3 0.0 5 0
Latch Sleep breakdown for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
-> ordered by misses desc
Get Spin &
Latch Name Requests Misses Sleeps Sleeps 1->4
cache buffers chains ############## 74,770,083 1,062,119 73803903/884
159/71439/10
582/0
redo allocation 170,107,983 3,441,055 149,631 3292872/1467
48/1426/9/0
library cache 310,882,781 2,831,747 89,240 2754499/6780
6/7405/2037/
0
shared pool 158,471,190 1,755,922 55,268 1704342/4836
9/2826/385/0
cas latch 538,804,153 1,553,992 6,927 1547125/6808
/58/1/0
row cache objects 161,142,207 1,176,998 27,658 1154070/1952
0/2560/848/0
process queue reference 1,893,917,184 1,119,215 106,454 78758/4351/1
36/0/0
Library Cache Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
->"Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
BODY 3,137,721 0.0 3,137,722 0.0 0 0
CLUSTER 6,741 0.1 4,420 0.2 0 0
INDEX 353,708 0.8 361,065 1.2 0 0
SQL AREA 17,052,073 0.3 54,615,678 0.9 410,682 19,628
TABLE/PROCEDURE 3,521,884 0.2 12,922,737 0.1 619 0
TRIGGER 1,975,977 0.0 1,975,977 0.0 1 0
SGA Memory Summary for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
SGA regions Size in Bytes
Database Buffers 22,330,474,496
Fixed Size 779,288
Redo Buffers 17,051,648
Variable Size 7,180,648,448
sum 29,528,953,880
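In case it helps narrow down the db file sequential read time, which dominates the Top 5 above, the heaviest physical-read statements can be pulled from the shared pool with a standard query like this (v$sqlarea exists in both 9.2 and 10g; 9.2 has no SQL_ID column, so statements are identified by HASH_VALUE):

-- Top 10 cached statements by physical reads; the single-block read time in the
-- Top 5 waits usually traces back to a handful of these.
SELECT *
FROM  (SELECT hash_value,
              executions,
              disk_reads,
              buffer_gets,
              SUBSTR(sql_text, 1, 80) AS sql_text_head
       FROM   v$sqlarea
       ORDER  BY disk_reads DESC)
WHERE ROWNUM <= 10;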
