Partition in production
Hi,
Is it advisable to partition directly in production, or does the partitioning need to be transported as well? If so, how?
Your help will be greatly appreciated.
Hi,
Partitioning in BW is of two types: logical partitioning and table (DB) partitioning.
Following are the pros and cons of both.
Logical Partitioning
Pros
i. Requires a MultiCube for an enterprise-wide view
ii. MultiCubes facilitate parallel processing
iii. Can be used in conjunction with database partitioning
iv. Reduces the size of InfoCubes
Cons
i. Requires additional development (MultiCube, transitioning queries to MultiCubes)
ii. Requires a change in existing documentation
iii. Data model changes have to be implemented on each base data target
iv. Challenging to keep all base targets in sync
Database Partitioning
Pros
i. Partition pruning at query run time
ii. Improved query performance
iii. InfoCube compression combines all requests into one request
Cons
i. The performance gain is only achieved for the partitioned InfoCube if the time dimension of the InfoCube is consistent. This means that with a partition using 0CALMONTH, all values of the 0CAL* characteristics of a data record have to match in the time dimension.
ii. Limited ways to delete data once compression has occurred (selective deletion or manual deletion by the DBA)
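To make the pruning point concrete, this is roughly what a 0CALMONTH-partitioned fact table looks like at the database level on Oracle. This is a hypothetical sketch only; the real table, column, and partition names are generated by BW:

```sql
-- Hypothetical sketch of a BW fact table range-partitioned on the
-- 0CALMONTH SID (names here are illustrative, not BW-generated).
CREATE TABLE "/BIC/EZSALES" (
  SID_0CALMONTH  NUMBER(10) NOT NULL,   -- partitioning key, e.g. 200801
  KEY_ZSALESP    NUMBER(10) NOT NULL,
  AMOUNT         NUMBER(17,2)
)
PARTITION BY RANGE (SID_0CALMONTH) (
  PARTITION P_200801 VALUES LESS THAN (200802),
  PARTITION P_200802 VALUES LESS THAN (200803),
  PARTITION P_MAX    VALUES LESS THAN (MAXVALUE)
);

-- A query restricted on the partitioning key is pruned to one partition:
SELECT SUM(amount)
FROM "/BIC/EZSALES"
WHERE sid_0calmonth = 200801;
```

A restriction on the partitioning key lets the optimizer touch only one partition; if the time characteristics in a record are inconsistent, the restriction cannot be mapped cleanly to a single partition and the gain is lost.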
So you have to choose depending on your requirements.
Note that you have to do the partitioning in production; it cannot be transported.
- Transaction SPRO (IMG)
- Business Information Warehouse > Links to Other Systems > Maintain Control Parameters for the Data Transfer
- or transaction RSCUSTV6
Hope this helps to solve your question.
Sonal...
Similar Messages
-
BLOB column in own tablespace, in partition, in table, tablespace to be moved
Hi All,
First off I am using Oracle Database 11.2.0.2 on AIX 5.3.
We have a table that is partitioned monthly.
In this table there is a partition (LOWER); this LOWER partition is 1.5 TB in size due to a BLOB column called ATTACHMENT.
The rest of the table is not that big, about 30 GB; it's the BLOB column that is using up all the space.
The LOWER partition is in its own default tablespace (DefaultTablespace), and the BLOB column in the LOWER partition is also in its own tablespace (TABLESPACE_LOB) of 1.5 TB.
I've been asked to free up some space by moving TABLESPACE_LOB (from the LOWER partition) to an archive database, confirming the data is there, and then removing the LOWER partition from production.
I don't have enough free space (or time) to do an expdp; I don't think it's doable with so much data.
CREATE TABLE tablename (
  xx VARCHAR2(14 BYTE),
  xx NUMBER(8),
  xx NUMBER,
  ATTACHMENT BLOB,
  xx DATE,
  xx VARCHAR2(100 BYTE),
  xx INTEGER
)
LOB (ATTACHMENT) STORE AS (
  TABLESPACE DefaultTablespace
  ENABLE STORAGE IN ROW
  NOCOMPRESS
)
TABLESPACE DefaultTablespace
RESULT_CACHE (MODE DEFAULT)
PARTITION BY RANGE (xx)
(
  PARTITION LOWER VALUES LESS THAN ('xx')
    LOGGING
    COMPRESS BASIC
    TABLESPACE DefaultTablespace
    LOB (ATTACHMENT) STORE AS (
      TABLESPACE TABLESPACE_LOB
      ENABLE STORAGE IN ROW
...>>
My idea was to take a Data Pump export of the table excluding the ATTACHMENT column, using external tables.
Then create the table on the archive database "with" the ATTACHMENT column.
Then import the data only; from what I understand, if you use a dump file that has too many columns Oracle will handle it, and I'm hoping it will work the other way round.
Then, on production, make TABLESPACE_LOB read-only and move it to the new file system.
This is a bit more complicated than a normal tablespace move due to how the table is split up.
Any advice would be very much appreciated.
JohnWatson wrote:
If disc space is the problem, would a network mode export/import work for you? I have never tried it with that much data, but theoretically it should work. You could do just a few G at a time.
I see what you are saying: if we use a network link then no redo would be generated on the export, but it would be for the import, right? But like you said, we could do 100 GB per day for the next ten days, and that would be very doable I think; it would just take a long time. On the archive database we back up archivelogs every morning, so anything generated by the import would be backed up to tape the following morning.
mtefft wrote:
Does it contain only that partition? Or are there other partitions in there as well? If there are other partitions, what % of the space is used by the partition you are trying to move?
Yep, tablespace_lob only contains the LOWER partition, no other partitions. Just the LOWER partition is taking up 1.5TB. -
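As an alternative to pushing 1.5 TB through Data Pump, transportable tablespaces copy the datafiles themselves and only export metadata. A hedged sketch of the steps (the directory, dump file, and datafile names are placeholders); note that a tablespace holding a single partition of a table is usually not self-contained, so a common preparatory step is to exchange the LOWER partition into a standalone table first, which is exactly what the containment check in step 1 will flag:

```sql
-- 1. Check that the tablespace set is self-contained (run as a DBA).
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('TABLESPACE_LOB', TRUE);
SELECT * FROM transport_set_violations;

-- 2. Make the tablespace read-only so its datafiles can be copied
--    consistently at the OS level.
ALTER TABLESPACE tablespace_lob READ ONLY;

-- 3. Export only the metadata (OS command, shown as a comment):
--    expdp system DIRECTORY=dp_dir DUMPFILE=ts_lob.dmp \
--          TRANSPORT_TABLESPACES=TABLESPACE_LOB
-- 4. Copy the datafiles to the archive host, then plug in there:
--    impdp system DIRECTORY=dp_dir DUMPFILE=ts_lob.dmp \
--          TRANSPORT_DATAFILES='/archive/ts_lob_01.dbf'
```

The big win is that the LOB data is never re-read and re-written through redo; only the datafile copy itself takes time.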
What is this partition type mentioned in the code: "PARTITION BY HASH"?
Hi Team,
I regularly add new partitions and subpartitions to a production table, based on date. For example, every day's data is stored in one partition.
Please find below the code I am using to add new partitions. I think this is called RANGE partitioning.
CREATE TABLE "owner"."TABLE_NAME"
( "COLUMN01" VARCHAR2(4),
"ACOLUMN02" VARCHAR2(32) NOT NULL ENABLE,
BUFFER_POOL DEFAULT)
TABLESPACE "Tablespace_name"
PARTITION BY RANGE ("Daily_TIME")
(PARTITION "ABC_2008_08_31" VALUES LESS THAN (TO_DATE(' 2008-08-31 23:59:59', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
Now I have found a new type of code in one of the new tables created by the development team.
The code is:
CREATE TABLE "owner"."TABLE_NAME"
( "COLUMN01" VARCHAR2(4),
"ACOLUMN02" VARCHAR2(32) NOT NULL ENABLE,
BUFFER_POOL DEFAULT)
TABLESPACE "Tablespace_name"
PARTITION BY HASH ("ACCOUNT_NUMBER")
(PARTITION "PART_P01"
TABLESPACE "TABLESPACE01",
PARTITION "PART_P13"
TABLESPACE "TABLESPACE01") ENABLE ROW MOVEMENT;
The new table's code does not contain anything like the following:
(PARTITION "ABC_2008_08_31" VALUES LESS THAN (TO_DATE(' 2008-08-31 23:59:59', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
So I am unable to alter this table to add new partitions month-wise.
Please suggest how to save data date-wise in this table, and also whether or not it comes under RANGE partitioning.
If it is not possible to add new partitions date-wise, I will inform the client.
Thanks & Regards,
Venkat
The new table uses hash partitioning, not range partitioning. You can refer to the Concepts guide:
http://docs.oracle.com/cd/E11882_01/server.112/e25789/schemaob.htm#CNCPT88864
The key of the new partitioned table is the account number, not a date, therefore you cannot partition this table date-wise. -
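To make the difference concrete: on a range-partitioned table you add date-wise partitions with an upper bound, while on a hash-partitioned table ADD PARTITION only adds another hash bucket, with no date boundary involved. A sketch with placeholder table and tablespace names:

```sql
-- Range-partitioned table: add a partition for a new day.
ALTER TABLE owner.range_table
  ADD PARTITION abc_2012_01_31
  VALUES LESS THAN (TO_DATE('2012-02-01', 'YYYY-MM-DD'))
  TABLESPACE tablespace01;

-- Hash-partitioned table: no VALUES clause is allowed. This merely
-- adds one more bucket, and Oracle redistributes rows by the hash
-- of the key (ACCOUNT_NUMBER here), not by date.
ALTER TABLE owner.hash_table
  ADD PARTITION part_p14 TABLESPACE tablespace01;
```

Attempting the VALUES LESS THAN form on the hash table fails with an error such as ORA-14255, because hash partitions have no range boundaries; that is why the table cannot be extended date-wise.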
Create all jobs in own partition or global
Dear Friends,
I have created our own partition in production to schedule all jobs, and we are using the GLOBAL partition in the QA system.
I cannot import into our own partition from the GLOBAL (QA system) partition.
My concern is whether we can perform all activities in our own partition the same as in the GLOBAL partition,
or whether we need to use the GLOBAL partition in production instead of our own partition.
Any help is highly appreciated.
Thanks in advance.
Regards
Jiggi
Dear Blom,
Thanks a lot for your helpful answer.
In that case, I will change the QA partition from GLOBAL to our own, the same as production.
Regards,
Jiggi -
Advice wanted: adding new HD, partitions etc
Hi everyone,
Just a quick request for tips rather than a problem - my G5 is working 100% fantastic.
I've just ordered a new internal hard drive (Deskstar 500GB) for high def video and will install it in the 2nd bay to sit alongside the current 150GB disk. Question is, how should I manage the new capacity?
For example, I've currently got my iTunes library (40GB and growing) and a bunch of QT movies (70GB and growing) on an external firewire drive which I'd like to bring in somewhere.
My system disk is right now at 85GB used, 63GB free but is usually more crowded than that.
I've read that it's good to keep the system disk to maximum 80% (?) usage to allow plenty for VM.
So do I clone my current system disk to the new 500GB monster and use the resulting excess space, together with the freed up 150GB as my FCP area? Or do I perhaps make a partition just for the system on one of the discs - if so, which one would be best and how big is big enough?
Any tips gratefully received!
Jason
Dual G5 2.5GHz, PowerBook G4 1.33GHz, iPod 40GB, Mac OS X (10.4.6), 2GB RAM
Hi Jason,
Let me start by saying that I have usually found that partitioning a drive and then using the partitions for productive work usually leads to lower performance, because the heads are continually being forced to move back and forth between the partitions.
Personally if it were my system, I would leave your system where it is. If you feel that you need more space on your system disk then you can migrate your music library over to the larger disk later. It will not be necessary to partition to do that.
Allan -
Dynamic calculations in a materialized view
Hi,
I have a problem. I created a materialized view which works perfectly, but now I need to perform some calculations with the data in my view. The view contains the working hours of employees who worked on different projects. Every project has a fixed amount of time (time_available) in which the employee has to finish the project. I aggregated the working hours to months for better presentation. What I want to accomplish here is a "simple" subtraction of the fixed amount for a project minus the working hours for the employee.
The problem here is that some projects have a duration of more than just one month. Naturally, all my values are in one tuple for every month. So when I have 3 months of working hours for a project, my view looks like this:
MV_Working_Hours:
Project --- Time_Available --- DATE --- Employee --- Working Days
Project A --- 50 Days --- 2011-05 --- Mr. A --- 15 Days
Project A --- 50 Days --- 2011-06 --- Mr. A --- 16 Days
Project A --- 50 Days --- 2011-07 --- Mr. A --- 16 Days
What I want to do is to calculate the remaining days like this :
Project --- Time_Available --- DATE --- Employee --- Working Days in Month --- Remaining Days
Project A --- 50 Days --- 2011-05 --- Mr. A --- 15 Days --- 35 Days
Project A --- 50 Days --- 2011-06 --- Mr. A --- 16 Days --- 19 Days <--- here I get 34, which is wrong for my needs!
Project A --- 50 Days --- 2011-07 --- Mr. A --- 16 Days --- 3 Days
Is there a way to realize this with "just" SQL, or do I have to use PL/SQL in OWB? I use OWB version 11gR2.
Thanks.
For everybody who is confronted with the same problem, here is the solution (thanks to "spencer7593" and "Justin Cave" from Stack Overflow):
SUM( "Working Days" )
OVER (PARTITION BY "Project", "Employee"
ORDER BY "DATE"
ROWS UNBOUNDED PRECEDING)
Please also check out the Oracle documentation on "SQL for Analysis and Reporting": http://download.oracle.com/docs/cd/E14072_01/server.112/e10810/analysis.htm -
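Putting that analytic function into a complete query against the view gives the remaining days directly; the running total of working days is subtracted from the fixed project budget. A sketch assuming the column names shown in the example above (the quoted identifiers may need adjusting to the real view):

```sql
SELECT "Project",
       "Time_Available",
       "DATE",
       "Employee",
       "Working Days",
       -- budget minus the cumulative hours up to and including this month
       "Time_Available"
         - SUM("Working Days")
             OVER (PARTITION BY "Project", "Employee"
                   ORDER BY "DATE"
                   ROWS UNBOUNDED PRECEDING) AS "Remaining Days"
FROM mv_working_hours;
```

For 2011-06 this computes 50 - (15 + 16) = 19, matching the expected output above rather than the per-row 50 - 16 = 34.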
Want to add Video and Graphic Card to my computer system
I notice that one of the users on the Apple discussion board, AndyO, has helped me many times in the past and was successful in solving my problems; might he be able to help me with this?
I purchased the Spore Creature Creator Starter Kit, which is just the creature creator and not the actual game itself. I wasn't able to install the game on my computer because I don't have the right video and graphics card (or any at all) installed. I contacted EA, the company that puts out Spore, and this is what they said the game requirements are: The minimum system requirements for Spore and Spore the Galactic Edition for Mac are as follows:
* Mac OS X 10.5.3 Leopard or higher
* Intel Core Duo Processor
* 1024 MB RAM
* At least 345 MB of hard drive space for installation, plus additional space for created creatures. (260 MB for the Trial Edition)
* Video Card - ATI X1600 or NVidia 7300 GT with 128 MB of Video RAM, or Intel Integrated GMA X3100
This game will not run on PowerPC (G3/G4/G5) based Mac systems (PowerMac).
For computers using built-in graphics chipsets, the game requires at least: an Intel Integrated Chipset GMA X3100 or Dual 2.0GHz CPUs, or 1.7GHz Core 2 Duo, or equivalent
Supported Video Cards
ATI Radeon(TM) series
* X1600, X1900, HD 2400, HD 2600
NVIDIA GeForce series
* 7300, 7600, 8600, 8800
Intel(R) Extreme Graphics
* GMA X3100
and that I need to upgrade my system. Basically I need an ATI Radeon X1600 or NVIDIA GeForce 7300, or an Intel Extreme Graphics GMA X3100, in order to be able to install and play this game. Since I already have the game, I would like to upgrade my computer, which brings me to ask: does someone know of, or can refer me to, a website that sells these video and graphics cards and supports the Mac mini?
Plus, they say that I need 1024 MB of RAM. I have already found 1024 MB of RAM, but I still need those video and graphics cards. I am adding my computer's System Profile to this post to show you what I have; would you be able to tell me if I can add these video and graphics cards and the 1024 MB of RAM to my system?
Note: Here is a web site that I am looking at to add the 1024 MB of Ram to my computer system:
http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=2418841&CatId=2453
What do you think?
Here is part of a copy of my computer's System Profile:
Trisha Foster’s Mac mini
10/10/08 5:39 PM
Hardware:
Hardware Overview:
Model Name: Mac mini
Model Identifier: Macmini2,1
Processor Name: Intel Core 2 Duo
Processor Speed: 2 GHz
Number Of Processors: 1
Total Number Of Cores: 2
L2 Cache: 4 MB
Memory: 1 GB
Bus Speed: 667 MHz
Boot ROM Version: MM21.009A.B00
SMC Version: 1.19f2
Serial Number: YM8073Q**
Network:
AirPort:
Type: AirPort
Hardware: AirPort
BSD Device Name: en1
IPv4 Addresses: 169.254.247.182
IPv4:
Addresses: 169.254.247.182
Configuration Method: DHCP
Interface Name: en1
Subnet Masks: 255.255.0.0
IPv6:
Configuration Method: Automatic
Proxies:
Exceptions List: *.local, 169.254/16
FTP Passive Mode: Yes
Ethernet:
MAC Address: 00:1f:5b:3e:ce:93
Media Options:
Media Subtype: Auto Select
Bluetooth:
Type: PPP (PPPSerial)
Hardware: Modem
BSD Device Name: Bluetooth-Modem
IPv4:
Configuration Method: PPP
IPv6:
Configuration Method: Automatic
Proxies:
FTP Passive Mode: Yes
Ethernet:
Type: Ethernet
Hardware: Ethernet
BSD Device Name: en0
IPv4 Addresses: 192.168.1.102
IPv4:
Addresses: 192.168.1.102
Configuration Method: DHCP
Interface Name: en0
NetworkSignature: IPv4.Router=192.168.1.1;IPv4.RouterHardwareAddress=00:21:29:c3:12:ae
Router: 192.168.1.1
Subnet Masks: 255.255.255.0
IPv6:
Configuration Method: Automatic
DNS:
Domain Name: cruzio.com
Server Addresses: 74.220.64.45, 74.220.64.55
DHCP Server Responses:
Domain Name: cruzio.com
Domain Name Servers: 74.220.64.45,74.220.64.55
Lease Duration (seconds): 0
DHCP Message Type: 0x05
Routers: 192.168.1.1
Server Identifier: 192.168.1.1
Subnet Mask: 255.255.255.0
Proxies:
Exceptions List: *.local, 169.254/16
FTP Passive Mode: Yes
Ethernet:
MAC Address: 00:16:cb:af:11:7f
Media Options: Full Duplex, flow-control
Media Subtype: 100baseTX
FireWire:
Type: FireWire
Hardware: FireWire
BSD Device Name: fw0
IPv4:
Configuration Method: DHCP
IPv6:
Configuration Method: Automatic
Proxies:
Exceptions List: *.local, 169.254/16
FTP Passive Mode: Yes
Ethernet:
MAC Address: 00:1f:5b:ff:fe:17:17:2a
Media Options: Full Duplex
Media Subtype: Auto Select
Software:
System Software Overview:
System Version: Mac OS X 10.5.5 (9F33)
Kernel Version: Darwin 9.5.0
Boot Volume: Macintosh HD
Boot Mode: Normal
Computer Name: Trisha Foster’s Mac mini
User Name: Trisha Foster (tiger)
Time since boot: 59 minutes
ATA:
ATA Bus:
PIONEER DVD-RW DVR-K06:
Capacity: 423.4 MB
Model: PIONEER DVD-RW DVR-K06
Revision: Q614
Removable Media: Yes
Detachable Drive: No
BSD Name: disk2
Protocol: ATAPI
Unit Number: 0
Socket Type: Internal
Low Power Polling: Yes
Mac OS 9 Drivers: No
Partition Map Type: Unknown
S.M.A.R.T. status: Not Supported
Volumes:
SPORE:
Capacity: 368.7 MB
Media Type: CD-ROM
Writable: No
File System: ISO Rockridge
BSD Name: disk2s0
Mount Point: /Volumes/SPORE
Audio (Built In):
Intel High Definition Audio:
Device ID: 0x83847680
Audio ID: 8
Available Devices:
Headphone:
Connection: Combo
Speaker:
Connection: Internal
Line In:
Connection: Combo
S/P-DIF Out:
Connection: Combo
S/P-DIF In:
Connection: Combo
Bluetooth:
Apple Bluetooth Software Version: 2.1.0f17
Hardware Settings:
Trisha Foster’s Mac mini:
Address: 00-1f-5b-72-12-aa
Manufacturer: Cambridge Silicon Radio
Firmware Version: 3.1965 (3.1965)
Bluetooth Power: On
Discoverable: Yes
HCI Version: 3 ($3)
HCI Revision: 1965 ($7ad)
LMP Version: 3 ($3)
LMP Subversion: 1965 ($7ad)
Device Type (Major): Computer
Device Type (Complete): Macintosh Desktop
Composite Class Of Device: 3154180 ($302104)
Device Class (Major): 1 ($1)
Device Class (Minor): 1 ($1)
Service Class: 385 ($181)
Requires Authentication: No
Services:
Bluetooth File Transfer:
Folder other devices can browse: ~/Public
Requires Authentication: Yes
State: Enabled
Bluetooth File Exchange:
Folder for accepted items: ~/Downloads
Requires Authentication: No
When other items are accepted: Ask
When PIM items are accepted: Ask
When receiving items: Prompt for each file
State: Enabled
Incoming Serial Ports:
Serial Port 1:
Name: Bluetooth-PDA-Sync
RFCOMM Channel: 3
Requires Authentication: No
Outgoing Serial Ports:
Serial Port 1:
Address:
Name: Bluetooth-Modem
RFCOMM Channel: 0
Requires Authentication: No
Diagnostics:
Power On Self-Test:
Last Run: 10/10/08 4:41 PM
Result: Passed
Disc Burning:
TEAC CD-W540E:
Firmware Revision: 1.0F
Interconnect: FireWire
Burn Support: Yes (Apple Supported Drive)
Cache: 8192 KB
Reads DVD: No
CD-Write: -R, -RW
Write Strategies: CD-TAO, CD-SAO, CD-Raw
Media: Insert media and refresh to show available burn speeds
PIONEER DVD-RW DVR-K06:
Firmware Revision: Q614
Interconnect: ATAPI
Burn Support: Yes (Apple Shipping Drive)
Cache: 2000 KB
Reads DVD: Yes
CD-Write: -R, -RW
DVD-Write: -R, -R DL, -RW, +R, +R DL, +RW
Write Strategies: CD-TAO, CD-SAO, CD-Raw, DVD-DAO
Media:
Type: CD-ROM
Blank: No
Erasable: No
Overwritable: No
Appendable: No
FireWire:
FireWire Bus:
Maximum Speed: Up to 400 Mb/sec
OXFORD IDE Device LUN 0:
Manufacturer: Oxford Semiconductor Ltd.
Model: 0x42A258
GUID: 0x1D200500648AF
Maximum Speed: Up to 400 Mb/sec
Connection Speed: Up to 400 Mb/sec
Sub-units:
OXFORD IDE Device LUN 0 Unit:
Unit Software Version: 0x10483
Unit Spec ID: 0x609E
Firmware Revision: 0x444133
Product Revision Level: 1.0F
Sub-units:
OXFORD IDE Device LUN 0 SBP-LUN:
Graphics/Displays:
Intel GMA 950:
Chipset Model: GMA 950
Type: Display
Bus: Built-In
VRAM (Total): 64 MB of shared system memory
Vendor: Intel (0x8086)
Device ID: 0x27a2
Revision ID: 0x0003
Displays:
L1916HW:
Resolution: 1280 x 800 @ 60 Hz
Depth: 32-bit Color
Core Image: Hardware Accelerated
Main Display: Yes
Mirror: Off
Online: Yes
Quartz Extreme: Supported
Rotation: Supported
Memory:
BANK 0/DIMM0:
Size: 512 MB
Type: DDR2 SDRAM
Speed: 667 MHz
Status: OK
Manufacturer: 0xAD00000000000000
Part Number: 0x48594D503536345336344350362D59352020
Serial Number: 0x0000**
BANK 1/DIMM1:
Size: 512 MB
Type: DDR2 SDRAM
Speed: 667 MHz
Status: OK
Manufacturer: 0xAD00000000000000
Part Number: 0x48594D503536345336344350362D59352020
Serial Number: 0x00001*
Power:
System Power Settings:
AC Power:
System Sleep Timer (Minutes): 10
Disk Sleep Timer (Minutes): 10
Display Sleep Timer (Minutes): 10
Sleep On Power Button: Yes
Automatic Restart On Power Loss: No
Wake On LAN: Yes
Hardware Configuration:
UPS Installed: No
Printers:
Canon MP830:
Status: Idle
Print Server: Local
Driver Version: 4.8.3
Default: Yes
URI: usb://Canon/MP830?serial=19702D
PPD: Canon MP830
PPD File Version: 1.0
PostScript Version: (3011.104) 0
Serial-ATA:
Intel ICH7-M AHCI:
Vendor: Intel
Product: ICH7-M AHCI
Speed: 1.5 Gigabit
Description: AHCI Version 1.10 Supported
Hitachi HTS541612J9SA00:
Capacity: 111.79 GB
Model: Hitachi HTS541612J9SA00
Revision: SBDAC7MP
Serial Number: SB2EF9L7G3**
Native Command Queuing: Yes
Queue Depth: 32
Removable Media: No
Detachable Drive: No
BSD Name: disk0
Mac OS 9 Drivers: No
Partition Map Type: GPT (GUID Partition Table)
S.M.A.R.T. status: Verified
Volumes:
Macintosh HD:
Capacity: 111.47 GB
Available: 63.14 GB
Writable: Yes
File System: Journaled HFS+
BSD Name: disk0s2
Mount Point: /
USB:
USB High-Speed Bus:
Host Controller Location: Built In USB
Host Controller Driver: AppleUSBEHCI
PCI Device ID: 0x27cc
PCI Revision ID: 0x0002
PCI Vendor ID: 0x8086
Bus Number: 0xfd
Keyboard Hub:
Version: 94.15
Bus Power (mA): 500
Speed: Up to 480 Mb/sec
Manufacturer: Apple, Inc.
Product ID: 0x1006
Serial Number: 000000000000
Vendor ID: 0x05ac (Apple Computer, Inc.)
USB RECEIVER:
Version: 25.10
Bus Power (mA): 100
Speed: Up to 1.5 Mb/sec
Manufacturer: Logitech
Product ID: 0xc50e
Vendor ID: 0x046d
Apple Keyboard:
Version: 0.69
Bus Power (mA): 100
Speed: Up to 1.5 Mb/sec
Manufacturer: Apple, Inc
Product ID: 0x0220
Vendor ID: 0x05ac (Apple Computer, Inc.)
iPod:
Version: 0.01
Bus Power (mA): 500
Speed: Up to 480 Mb/sec
Manufacturer: Apple Inc.
Product ID: 0x1291
Serial Number: aa1186ca1738fd28eef1eeb1b22754********
Vendor ID: 0x05ac (Apple Computer, Inc.)
USB2.0 Hub:
Version: 7.02
Bus Power (mA): 500
Speed: Up to 480 Mb/sec
Product ID: 0x0606
Vendor ID: 0x05e3
USB 2.0 3.5" DEVICE:
Capacity: 153.39 GB
Removable Media: Yes
Detachable Drive: Yes
BSD Name: disk1
Version: 0.01
Bus Power (mA): 500
Speed: Up to 480 Mb/sec
Manufacturer: Macpower Technology Co.LTD.
Mac OS 9 Drivers: Yes
Partition Map Type: APM (Apple Partition Map)
Product ID: 0x0073
Serial Number: BC0*
S.M.A.R.T. status: Not Supported
Vendor ID: 0x0dc4
Volumes:
miniStack:
Capacity: 153.26 GB
Available: 42.96 GB
Writable: Yes
File System: Journaled HFS+
BSD Name: disk1s10
Mount Point: /Volumes/miniStack
MP830:
Version: 1.15
Bus Power (mA): 500
Speed: Up to 480 Mb/sec
Manufacturer: Canon
Product ID: 0x1713
Serial Number: 197*
Vendor ID: 0x04a9
USB Bus:
Host Controller Location: Built In USB
Host Controller Driver: AppleUSBUHCI
PCI Device ID: 0x27c8
PCI Revision ID: 0x0002
PCI Vendor ID: 0x8086
Bus Number: 0x1d
USB Bus:
Host Controller Location: Built In USB
Host Controller Driver: AppleUSBUHCI
PCI Device ID: 0x27c9
PCI Revision ID: 0x0002
PCI Vendor ID: 0x8086
Bus Number: 0x3d
USB Bus:
Host Controller Location: Built In USB
Host Controller Driver: AppleUSBUHCI
PCI Device ID: 0x27ca
PCI Revision ID: 0x0002
PCI Vendor ID: 0x8086
Bus Number: 0x5d
USB Bus:
Host Controller Location: Built In USB
Host Controller Driver: AppleUSBUHCI
PCI Device ID: 0x27cb
PCI Revision ID: 0x0002
PCI Vendor ID: 0x8086
Bus Number: 0x7d
Bluetooth USB Host Controller:
Version: 19.65
Bus Power (mA): 500
Speed: Up to 12 Mb/sec
Manufacturer: Apple, Inc.
Product ID: 0x8205
Vendor ID: 0x05ac (Apple Computer, Inc.)
IR Receiver:
Version: 1.10
Bus Power (mA): 500
Speed: Up to 12 Mb/sec
Manufacturer: Apple Computer, Inc.
Product ID: 0x8240
Vendor ID: 0x05ac (Apple Computer, Inc.)
AirPort Card:
AirPort Card Information:
Wireless Card Type: AirPort Extreme (0x168C, 0x86)
Wireless Card Locale: USA
Wireless Card Firmware Version: 1.4.4
Current Wireless Network: Trisha's iPod
Wireless Channel: 11
Firewall:
Firewall Settings:
Mode: Allow all incoming connections
Locations:
Automatic:
Active Location: Yes
Services:
AirPort:
Type: IEEE80211
BSD Device Name: en1
Hardware (MAC) Address: 00:1f:5b:3e:ce:93
IPv4:
Configuration Method: DHCP
IPv6:
Configuration Method: Automatic
AppleTalk:
Configuration Method: Node
Proxies:
Exceptions List: *.local, 169.254/16
FTP Passive Mode: Yes
IEEE80211:
Join Mode: Automatic
JoinModeFallback: Prompt
PowerEnabled: 1
PreferredNetworks:
SecurityType: Open
SSID_STR: Apple Store
Unique Network ID: 4BCBEE2D-83B9-4CC1-B110-8D0E8E5DF49C
SecurityType: Open
SSID_STR: linksys
Unique Network ID: F2551FD6-95A7-4F98-9F7F-758A89DBD931
Bluetooth:
Type: PPP
IPv4:
Configuration Method: PPP
IPv6:
Configuration Method: Automatic
Proxies:
FTP Passive Mode: Yes
PPP:
ACSP Enabled: No
Display Terminal Window: No
Redial Count: 1
Redial Enabled: Yes
Redial Interval: 5
Use Terminal Script: No
Dial On Demand: No
Disconnect On Fast User Switch: Yes
Disconnect On Idle: Yes
Disconnect On Idle Time: 600
Disconnect On Logout: Yes
Disconnect On Sleep: Yes
Idle Reminder: No
Idle Reminder Time: 1800
IPCP Compression VJ: Yes
LCP Echo Enabled: No
LCP Echo Failure: 4
LCP Echo Interval: 10
Log File: /var/log/ppp.log
Verbose Logging: No
Ethernet:
Type: Ethernet
BSD Device Name: en0
Hardware (MAC) Address: 00:16:cb:af:11:7f
IPv4:
Configuration Method: DHCP
IPv6:
Configuration Method: Automatic
AppleTalk:
Configuration Method: Node
Proxies:
Exceptions List: *.local, 169.254/16
FTP Passive Mode: Yes
FireWire:
Type: FireWire
BSD Device Name: fw0
Hardware (MAC) Address: 00:1f:5b:ff:fe:17:17:2a
IPv4:
Configuration Method: DHCP
IPv6:
Configuration Method: Automatic
Proxies:
Exceptions List: *.local, 169.254/16
Can someone please help me with any suggestions? Otherwise I will have to go to an Apple Store and ask for help there.
Thank You,
Trisha Foster
<Edited by Moderator>
Hi Trisha,
The only modern Macs with video cards are the Mac Pro and Xserve (server) systems, and both are priced quite a bit higher than any Mac mini.
Some systems, like the MacBook Pro laptop, can be ordered with different graphics cards, but that has to be done when the system is built, as the card is not replaceable afterwards. To find out which systems offer this, go to store.apple.com; for each system you select, you can see if there is an option for a better video card.
David -
How to use data densification?
I recently attended an Online Oracle Developer Day. In the presentation on Data Warehousing Best Practices, given by Rekha Balwada, there was mention of data densification. I attempted to use it in my environment, without success. Below is the code for my test case, based on the example given in the presentation (slide 57 of 59). I would very much appreciate any help in understanding why my test case does not work.
I am currently working in Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production.
The example used two tables, INVENTORY and TIMES. Here is the DDL I used to create and populate the tables, and below that the query I ran (copied from the presentation). The query creates a row for every row in the TIMES table, but the QUANT column is left null unless there is actual data for that day, which is not what I expected from the presentation. The presentation suggested that the last number in the QUANT column would be repeated for each day where there was no actual row in the table.
The presentation results (copied from the slide pdf "datawarehousing_developer_13361721729343882"):
Times_id Product Quant
01-APR-2011 Bottle 10
02-APR-2011 Bottle 10
03-APR-2011 Bottle 10
04-APR-2011 Bottle 10
05-APR-2011 Bottle 8
01-APR-2011 Can 15
02-APR-2011 Can 15
03-APR-2011 Can 15
04-APR-2011 Can 11
05-APR-2011 Can 11
-- File created - Thursday-May-24-2012
-- DDL for Table TIMES
CREATE TABLE "TIMES"
( "TIME_ID" DATE
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 163840 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ;
REM INSERTING into TIMES
SET DEFINE OFF;
Insert into TIMES (TIME_ID) values (to_date('2012-04-01 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-02 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-03 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-04 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-05 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-06 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-07 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-08 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-09 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-10 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-11 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-12 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-13 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-14 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-15 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-16 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-17 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-18 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-19 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-20 00.00.00','YYYY-MM-DD HH24.MI.SS'));
Insert into TIMES (TIME_ID) values (to_date('2012-04-21 00.00.00','YYYY-MM-DD HH24.MI.SS'));
-- Constraints for Table TIMES
ALTER TABLE "TIMES" MODIFY ("TIME_ID" NOT NULL ENABLE);
-- File created - Thursday-May-24-2012
-- DDL for Table INVENTORY
CREATE TABLE "INVENTORY"
( "TIME_ID" DATE,
"PRODUCT" VARCHAR2(20 BYTE),
"QUANT" NUMBER
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 163840 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ;
REM INSERTING into INVENTORY
SET DEFINE OFF;
Insert into INVENTORY (TIME_ID,PRODUCT,QUANT) values (to_date('2012-04-01 00.00.00','YYYY-MM-DD HH24.MI.SS'),'Bottle',10);
Insert into INVENTORY (TIME_ID,PRODUCT,QUANT) values (to_date('2012-04-01 00.00.00','YYYY-MM-DD HH24.MI.SS'),'Can',15);
Insert into INVENTORY (TIME_ID,PRODUCT,QUANT) values (to_date('2012-04-05 00.00.00','YYYY-MM-DD HH24.MI.SS'),'Bottle',8);
Insert into INVENTORY (TIME_ID,PRODUCT,QUANT) values (to_date('2012-04-04 00.00.00','YYYY-MM-DD HH24.MI.SS'),'Can',11);
------------------------------------------------------------------------
Here is the query I ran (also copied from the slide pdf):
SELECT
times.time_id,
product,
quant
FROM
inventory PARTITION BY(product)
RIGHT OUTER JOIN times
ON
times.time_id = inventory.time_id
----------------------------------------------------------------------------
Here are my results:
Times_id Product Quant
2012-04-01 00.00.00 Bottle 10
2012-04-02 00.00.00 Bottle
2012-04-03 00.00.00 Bottle
2012-04-04 00.00.00 Bottle
2012-04-05 00.00.00 Bottle 8
2012-04-06 00.00.00 Bottle
2012-04-07 00.00.00 Bottle
2012-04-08 00.00.00 Bottle
2012-04-09 00.00.00 Bottle
2012-04-10 00.00.00 Bottle
2012-04-11 00.00.00 Bottle
2012-04-12 00.00.00 Bottle
2012-04-13 00.00.00 Bottle
2012-04-14 00.00.00 Bottle
2012-04-15 00.00.00 Bottle
2012-04-16 00.00.00 Bottle
2012-04-17 00.00.00 Bottle
2012-04-18 00.00.00 Bottle
2012-04-19 00.00.00 Bottle
2012-04-20 00.00.00 Bottle
2012-04-21 00.00.00 Bottle
2012-04-01 00.00.00 Can 15
2012-04-02 00.00.00 Can
2012-04-03 00.00.00 Can
2012-04-04 00.00.00 Can 11
2012-04-05 00.00.00 Can
2012-04-06 00.00.00 Can
2012-04-07 00.00.00 Can
2012-04-08 00.00.00 Can
2012-04-09 00.00.00 Can
2012-04-10 00.00.00 Can
2012-04-11 00.00.00 Can
2012-04-12 00.00.00 Can
2012-04-13 00.00.00 Can
2012-04-14 00.00.00 Can
2012-04-15 00.00.00 Can
2012-04-16 00.00.00 Can
2012-04-17 00.00.00 Can
2012-04-18 00.00.00 Can
2012-04-19 00.00.00 Can
2012-04-20 00.00.00 Can
2012-04-21 00.00.00 Can
Thank you for your time,
Vin Steele
>
SELECT
times.time_id,
product,
quant
FROM
inventory PARTITION BY(product)
RIGHT OUTER JOIN times
ON
times.time_id = inventory.time_id
The right outer join means that every row of the 'times' table is kept, with NULL values in the inventory columns for 'time_id' values that do not exist in the 'inventory' table.
There is nothing in the query or data that would create missing values.
Another anomaly is that there is no 'product' table. That table would normally hold the master list of products perhaps using product_id as the primary key.
Then a cartesian join between the 'times' table and the 'product' table would produce the dense set of dimensions for the matrix. This join could seed each matrix data value with a zero (0) for quantity (if NULL is not desired) which would be used for products that are not represented at all in the 'inventory' table or are not represented for particular days.
The matrix data values for actual data would be added to the zero cell values to get the final value for the cell. -
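The cartesian-join densification described above can be sketched without Oracle's partitioned outer join at all. This is an illustrative adaptation in SQLite via Python's sqlite3 (not the Oracle query from the thread): cross-join the distinct products with the times table, then left-join the sparse inventory rows, seeding missing cells with 0. Table and column names follow the thread; the rows are a small made-up subset.

```python
# Densification via cross join + left join (SQLite sketch of the idea above).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE times (time_id TEXT);
CREATE TABLE inventory (time_id TEXT, product TEXT, quant INTEGER);
INSERT INTO times VALUES ('2012-04-01'), ('2012-04-02'), ('2012-04-03');
INSERT INTO inventory VALUES
  ('2012-04-01', 'Can', 15),
  ('2012-04-03', 'Bottle', 8);
""")

rows = conn.execute("""
SELECT t.time_id,
       p.product,
       COALESCE(i.quant, 0) AS quant      -- seed missing cells with 0
FROM times t
CROSS JOIN (SELECT DISTINCT product FROM inventory) p
LEFT JOIN inventory i
       ON i.time_id = t.time_id AND i.product = p.product
ORDER BY p.product, t.time_id
""").fetchall()

for r in rows:
    print(r)
```

Every (product, time_id) combination comes back exactly once, dense, just as the partitioned outer join produces in the results above.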
Trying to understand the MODEL clause
Hi All,
I'm finally biting the bullet and learning how to use the model clause, but I'm having a bit of trouble.
The following example data comes from the book "Advanced SQL Functions in Oracle 10g".
with sales1 as (select 'blueberries' product
,'pensacola' location
,9000 amount
,2001 year
from dual
union all
select 'cotton', 'pensacola',16000,2001 from dual
union all
select 'lumber','pensacola',3500,2001 from dual
union all
select 'cotton','mobile',24000,2001 from dual
union all
select 'lumber', 'mobile',2800,2001 from dual
union all
select 'plastic','mobile',32000,2001 from dual
union all
select 'blueberries','pensacola',9000,2002 from dual
union all
select 'cotton', 'pensacola',16000,2002 from dual
union all
select 'lumber','pensacola',3500,2002 from dual
union all
select 'cotton','mobile',24000,2002 from dual
union all
select 'lumber', 'mobile',2800,2002 from dual
union all
select 'plastic','mobile',32000,2002 from dual
union all
select 'blueberries','pensacola',9000,2003 from dual
union all
select 'cotton', 'pensacola',16000,2003 from dual
union all
select 'lumber','pensacola',3500,2003 from dual
union all
select 'cotton','mobile',24000,2003 from dual
union all
select 'lumber', 'mobile',2800,2003 from dual
union all
select 'plastic','mobile',32000,2003 from dual
)
select location, product, year, s
from sales1
model
--return updated rows
partition by (product)
dimension by (location,year)
measures (amount s) ignore nav
(s['pensacola',2003] = sum(s)['pensacola',cv() > cv()-1])
I would have expected the measures clause to return the sum of all amounts for pensacola where the year > 2003 - 1 = 2002, which would make the total for [blueberries,2003] = 18000, but instead it comes out as 27000, apparently summing all values for blueberries for that partition... equivalent to sum(s)['pensacola',ANY].
how would I go about making s['pensacola',2003] = the sum of itself plus the previous row?
I realise I can do
s['pensacola',cv()]+s['pensacola',cv()-1]
but I'm really trying to understand why what I have doesn't appear to work the way I expect.
Because
(s['pensacola',2003] = sum(s)['pensacola',cv() > cv()-1])
means
(s['pensacola',2003] = sum(s)['pensacola',cv(year) > cv(year)-1])
means
(s['pensacola',2003] = sum(s)['pensacola',2003 > 2003-1])
means
(s['pensacola',2003] = sum(s)['pensacola',2003 > 2002])
means
(s['pensacola',2003] = sum(s)['pensacola',year is any])
s['pensacola',cv()]+s['pensacola',cv()-1]
means
sum(s)['pensacola',year between cv()-1 and cv()] -
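To make the "current row plus previous row" behaviour concrete outside the MODEL clause, the same calculation can be written as an ordinary window function with ROWS BETWEEN 1 PRECEDING AND CURRENT ROW. This sketch runs it in SQLite through Python's sqlite3 (the OVER clause is the same in Oracle) on a trimmed version of the thread's blueberries data, an illustrative substitution rather than the thread's own query.

```python
# "Sum of itself plus the previous row" as a window frame (SQLite sketch).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales1 (product TEXT, location TEXT, amount INTEGER, year INTEGER);
INSERT INTO sales1 VALUES
  ('blueberries', 'pensacola', 9000, 2001),
  ('blueberries', 'pensacola', 9000, 2002),
  ('blueberries', 'pensacola', 9000, 2003);
""")

rows = conn.execute("""
SELECT product, year,
       SUM(amount) OVER (
           PARTITION BY product, location
           ORDER BY year
           ROWS BETWEEN 1 PRECEDING AND CURRENT ROW  -- current + previous row
       ) AS s
FROM sales1
ORDER BY year
""").fetchall()

for r in rows:
    print(r)   # 2001 -> 9000, 2002 -> 18000, 2003 -> 18000
```

This is the windowed equivalent of sum(s)['pensacola', year between cv()-1 and cv()] in the MODEL rule above.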
Does Position Of Cube Dimension Matter While Adding it as Measure
I've created a cube on the Adventure Works Database. And when tried to process the Cube i get error
Errors in the OLAP storage engine: The attribute key cannot be found when processing: Table: 'dbo_DimProduct', Column: 'ProductKey', Value: '1'. The attribute is 'Product Key'. Errors in the OLAP storage engine: The attribute key was converted to an unknown
member because the attribute key was not found. Attribute Product Key of Dimension: Dim Product from Database: Analysis Services Project, Cube: Adventure Works DW, Measure Group: Dim Product, Partition: Dim Product, Record: 1. Errors in the OLAP storage engine:
The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation. Errors in the OLAP storage engine: An error occurred while processing the 'Dim Product' partition of the
'Dim Product' measure group for the 'Adventure Works DW' cube from the Analysis Services Project database.
This is faced When Dimproduct Dimension was added first and later Dimtime Dimension.
I delete Dim Product Database Dimension and have Added it again as both Measure and Cube Dimension and it process succesfully.
Only Difference i've Found is now Dimproduct is listed last in cube dimensions.
Question - Does position of Cube Dimension matters when adding its a Measure?
And what is causing this error?
HS
Hi HS,
Since this issue is related to Analysis Services, I will move this thread to Analysis Services forum. Sometime delay might be expected from the job transferring. Your patience is greatly appreciated.
Thank you for your understanding and support.
Regards,
Katherine Xiong
TechNet Community Support -
Macbook Pro 15" + DROBO FW800 + KONTAKT PLAYER?
Hi Everyone,
I use my MacBook mostly for preproduction and programming.
I want to purchase Kontakt Komplete and East/West Quantum Symphony Platinum. Each of these requires about 100GB of space to host the instruments on your hard drive, which is what I want (I figure I won't utilise the packages completely if I don't have instant access to all of their features).
This also means that I should start preparing for a work-around solution for future purchases as well, given that I'm probably going to want more instruments and my first purchase of a Mac Pro will only occur in about two years, realistically.
What I was thinking was this:
DROBO 4 Hard Drive with 3 x 1 TB Hard Drives - These would be Western Digital Caviar Black Firewire 800 Drives (7200RPM):
1 Hard Drive I use for my Instrument Libraries
1 Hard Drive I use for storage
1 Hard Drive for back up (duplicate of storage)
My current workflow is that I use a small 120GB partition for production (the other 250GB is for the System) - this partition is ONLY for Logic, so that searching is easier for the processor. As soon as that partition gets a bit full, I copy the entire drive to a hard drive, duplicate that and then format - ready to start from fresh. I also run Time Machine.
So my question is this:
Would that drive mentioned above be able to handle the libraries in real time? i.e. could I have all of my instruments stored on an external hard drive and work seamlessly?
Thanks for your help!
James
Even if you had the space on your internal drive, it could never cope with EWSO platinum if you started opening up more than a few instruments, so external is the way to go.
I am unfamiliar with the drive setup you are talking about, but I take it that all of the drives would be sharing a single FW800 port... all your VI samples, audio tracks etc.
Hmm. FW800 is good, but have you investigated eSATA? On my MBP there is a slot which can accommodate an eSATA card, which would give you throughput nearly matching an internal SATA drive, somewhere in the order of 3 Gb/s. Now we're talking. Even on my Mac Pro, hard drive speed became an issue with a lot of orchestral VI's open, so I'd be looking into it a bit more, and asking a few more questions. Ideally from someone who knows more than me. -
Query for using "analytical functions" in DWH...
Dear team,
I would like to know if following task can be done using analytical functions...
If it can be done using other ways, please do share the ideas...
I have table as shown below..
Create Table t As
Select *
From
(
Select 12345 PRODUCT, 'W1' WEEK, 10000 SOH, 0 DEMAND, 0 SUPPLY, 0 EOH From dual Union All
Select 12345, 'W2', 0, 100, 50, 0 From dual Union All
Select 12345, 'W3', 0, 100, 50, 0 From dual Union All
Select 12345, 'W4', 0, 100, 50, 0 From dual
)
PRODUCT WEEK SOH DEMAND SUPPLY EOH
12345 W1 10,000 0 0 10000
12345 W2 0 100 50 0
12345 W3 0 100 50 0
12345 W4 0 100 50 0
Now I want to calculate EOH (ending on hand) quantity for W1...
This EOH for W1 becomes SOH (Starting on hand) for W2...and so on...till end of weeks
The formula is: EOH = SOH - DEMAND + SUPPLY
The output should be as follows...
PRODUCT WEEK SOH DEMAND SUPPLY EOH
12345 W1 10,000 10000
12345 W2 10,000 100 50 9950
12345 W3 9,950 100 50 9900
12345 W4 9,000 100 50 8950
Kindly share your ideas...
Nicloei W wrote:
Means SOH_AFTER_SUPPLY for W1, should be displayed under SOH FOR W2...i.e. SOH for W4 should be SOH_AFTER_SUPPLY for W3, right?
If yes, why are you expecting it to be 9000 for W4??
So in output should be...
PRODUCT WE SOH DEMAND SUPPLY EOH SOH_AFTER_SUPPLY
12345 W1 10000 0 0 0 10000
12345 W2 10000 100 50 0 9950
12345 W3 9950 100 50 0 *9900*
12345 W4 *9000* 100 50 0 9850
per logic you explained, shouldn't it be *9900* instead???
you could customize Martin Preiss's logic for your requirement :
SQL> with
2 data
3 As
4 (
5 Select 12345 PRODUCT, 'W1' WEEK, 10000 SOH, 0 DEMAND, 0 SUPPLY, 0 EOH From dual Union All
6 Select 12345, 'W2', 0, 100, 50, 0 From dual Union All
7 Select 12345, 'W3', 0, 100, 50, 0 From dual Union All
8 Select 12345, 'W4', 0, 100, 50, 0 From dual
9 )
10 Select Product
11 ,Week
12 , Sum(Soh) Over(Partition By Product Order By Week)- Sum(Supply) Over(Partition By Product Order By Week)+Supply Soh
13 ,Demand
14 ,Supply
15 , Sum(Soh) Over(Partition By Product Order By Week)- Sum(Supply) Over(Partition By Product Order By Week) eoh
16 from data;
PRODUCT WE SOH DEMAND SUPPLY EOH
12345 W1 10000 0 0 10000
12345 W2 10000 100 50 9950
12345 W3 9950 100 50 9900
12345 W4 9900 100 50 9850
Vivek L
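The running-balance idea above can also be sketched as a single cumulative window sum that nets demand against supply week by week. This is an illustrative SQLite version run through Python's sqlite3 (Oracle accepts the same window syntax), a simplification of the posted query rather than a copy of it; the numbers match the thread's expected output.

```python
# Ending-on-hand as a running total of (soh - demand + supply), per product.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE inv (product INTEGER, week TEXT,
                  soh INTEGER, demand INTEGER, supply INTEGER);
INSERT INTO inv VALUES
  (12345, 'W1', 10000, 0,   0),
  (12345, 'W2', 0,     100, 50),
  (12345, 'W3', 0,     100, 50),
  (12345, 'W4', 0,     100, 50);
""")

rows = conn.execute("""
SELECT product, week,
       SUM(soh - demand + supply)
           OVER (PARTITION BY product ORDER BY week) AS eoh
FROM inv
ORDER BY week
""").fetchall()

for r in rows:
    print(r)   # W1 10000, W2 9950, W3 9900, W4 9850
```

Each week's EOH is then the next week's SOH by construction, with no self-referencing join needed.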
I'm trying to create two realms that are partitioned pretty heavily, separating production from development, but using the same access manager infrastructure. This seems to fail, in that when I login to the realm I've partitioned for development, I can then login with that identity to the resources I've partitioned for production. How can I prevent this?
Details
I've created two realms. TCPIP.COM.Development and TCPIP.COM.Production. Both realms are sub-realms of amroot. The Development realm points to a development LDAP server for the data store and the development ldap for the password authentication modules. The passwords are predictable so developers can simulate access as other users. In the production realm, the data store points to the production LDAP and the authentication modules uses the production passwords.
The user IDs in both data stores are the same, since development mirrors production as closely as possible. The passwords are different, group membership is different, roles are different, etc., in order to debug and test applications as they are developed.
I've configured CDSSO, and setup the agents in the root realm (amroot). The root realm delegates through a policy referral the resources for the applicable realms, in order to replay the cookies, I set the cookie scope in platform properties to the entire domain (tcpip.com).
If I access a resource with a policy in the development realm I'm redirected to login with the authentication module that uses the development LDAP I can print the organization in the session properties and see:
Organization is o=TCPIP.COM.Development,ou=services,ou=amroot,ou=admin,ou=resource,o=tcpip,c=us
when I access the agent protecting a resource in the production realm, the same user is printed and I don't need to login. The access has been granted with the development credentials.
Only if I expressly add the realm to the login servlet (/amserver/UI/Login?realm=TCPIP.COM.Production) will it prompt me for the "You have already logged in. Do you want to log out and then login to a different organization?" This is the default behavior I desire, but I can't trust someone to append the realm to a login.
The same user prints out because that is the user from the original SSO Token.
Somehow your policy is allowing an authenticated user in one realm to access resources in another. I would check the policy (or possibly the policy agent config files) for the development realm, just a guess but it sounds like a of copy paste problem of some kind?
What if you put a realm=production condition in your access policy for the production realm resources? -
ROWS BETWEEN 12 PRECEDING AND CURRENT ROW
I have a report with the last 12 months and for each month, I have to show the sales sum of the last 12 months. For example, for 01/2001, I have to show the sales sum from 01/2000 to 12/2000.
I have tried this:
SUM(sales)
OVER (PARTITION BY product, channel
ORDER BY month ASC
ROWS BETWEEN 12 PRECEDING AND CURRENT ROW)
The problem is: this calculation only considers the data that are in the report.
For example, if my report shows the data from jan/2001 to dec/2001, then for the first month the calculation result only returns the result of jan/2001, for feb/2001, the result is feb/2001 + jan/2001.
How can I include the data of the last year in my calculation???
Hi,
I couldn't solve my problem using Discoverer Plus functions yet...
I used this function to return the amount sold last year:
SUM("Amount Sold SUM 1") OVER(PARTITION BY Products.Prod Name ORDER BY TO_DATE(Times."Calendar Year",'YYYY') RANGE BETWEEN INTERVAL '1' YEAR PRECEDING AND INTERVAL '1' YEAR PRECEDING )
The result was: it worked perfectly well when I had no condition; so it showed three years (1998, 1999, 2000) and two data points (Amount Sold, Amount Sold Last Year). The "Amount Sold Last Year" was null for 1998, as there is no data for 1997.
Then I created a condition to filter the year (Times."Calendar Year" = 1999), because I must show only one year in my report. Then I got the "Amount Sold" with the correct result and the "Amount Sold Last Year" with null values. As I do have data for 1998, the result didn't return the result I expected.
Am I doing something wrong?? -
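The underlying issue in this thread, that an analytic function sees only the rows the query actually returns, holds in plain SQL too. A common fix is to compute the rolling sum over ALL periods in an inner query and restrict the displayed periods in an outer query. This sketch demonstrates both orderings in SQLite via Python's sqlite3 (an illustrative substitution, not Discoverer; the window syntax matches Oracle, and it uses 11 PRECEDING for a true 12-row window).

```python
# Filter-inside vs filter-outside a rolling window (SQLite sketch).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (month INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(m, 10) for m in range(1, 25)])   # 24 months, 10 per month

window = """SUM(amount) OVER (ORDER BY month
                              ROWS BETWEEN 11 PRECEDING AND CURRENT ROW)"""

# Wrong: the WHERE clause removes months 1-12 BEFORE the window is evaluated,
# so early months in the report only sum over themselves.
rows_wrong = conn.execute(
    f"SELECT month, {window} FROM sales WHERE month >= 13 ORDER BY month"
).fetchall()

# Right: compute the window over all months, then filter the display range.
rows_right = conn.execute(f"""
    SELECT month, rolling FROM
      (SELECT month, {window} AS rolling FROM sales)
    WHERE month >= 13
    ORDER BY month""").fetchall()

print(rows_wrong[0])   # (13, 10)  -- window only sees the filtered rows
print(rows_right[0])   # (13, 120) -- full 12-month window
```

In Discoverer terms: the calculation must run against a query that still contains the prior year's rows, with the year condition applied afterwards.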
Hi,
I'm trying to do something which I would guess is quite a common query, but after scratching my head and perusing the web I am still no closer to a solution.
I am running:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
I'm looking to sum up a set of values, taking into account both a parent grouping and start and end dates.
For the parent grouping I am using:
+SUM([value]) over (Partition by [Parent] order by [Parent],[Child])+
And I was hoping to be able to extend this SUM to also handle the start and end dates, so the final output would contain a sum of the values for each different time period.
As an example, using the data below I'm trying to sum up the price of the components of a car over time:
row, product, component, rate, start date, end date
1, car, chassis, 180, 01/01/2000, 31/12/2009
2, car, chassis, 200, 01/01/2010, 01/01/2050
3, car, engine, 100, 01/01/2000, 01/01/2050
Notice there is a change of price for Component 'chassis', so the output I'm looking for is:
row, product, component, rate, start date, end date, sum
1, car, chassis, 180, 01/01/2000, 31/12/2009, 280
2, car, engine, 100, 01/01/2000, 31/12/2009, 280
3, car, chassis, 200, 01/01/2010, 01/01/2050, 300
4, car, engine, 100, 01/01/2010, 01/01/2050, 300
But in reality all I need is:
row, product, start date, end date, sum
1, car, 01/01/2000, 31/12/2009, 280
2, car, 01/01/2010, 01/01/2050, 300
Preferably the query would be in a view rather than a stored procedure, and it needs to be able to handle many 'products', 'components' and start/end dates.
All help most appreciated, and if any more info is required, please let me know.
Thanks,
Julian
Hi Frank,
Thanks for picking up this query, I'll try to explain my points in more detail:
+SUM([value]) over (Partition by [Parent] order by [Parent],[Child])+
I don't see columns called value, parent or child in the sample data below.
Is value the same as rate? What are parent and child? In the example:
Product is the parent
Component is the child
Rate is the value
Whenever you have a problem, post CREATE TABLE and INSERT statements for your sample data.
CREATE TABLE "REPOSITORY"."PRODUCT_RATES"
( "PRODUCT" VARCHAR2(255 BYTE),
"COMPONENT" VARCHAR2(255 BYTE),
"RATE" NUMBER(9,2),
"START_DATE" DATE,
"END_DATE" DATE
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "SHOP_AREA" ;
insert into REPOSITORY.PRODUCT_RATES (PRODUCT, COMPONENT, RATE, START_DATE, END_DATE) values ('car', 'chassis', 180, to_date('01-01-2000','dd-mm-yyyy'), to_date('31-12-2009','dd-mm-yyyy'))
insert into REPOSITORY.PRODUCT_RATES (PRODUCT, COMPONENT, RATE, START_DATE, END_DATE) values ('car', 'chassis', 200, to_date('01-01-2010','dd-mm-yyyy'), to_date('01-01-2050','dd-mm-yyyy'))
insert into REPOSITORY.PRODUCT_RATES (PRODUCT, COMPONENT, RATE, START_DATE, END_DATE) values ('car', 'engine', 100, to_date('01-01-2000','dd-mm-yyyy'), to_date('01-01-2050','dd-mm-yyyy'))
Although the above short scenario highlights my issue, to expand on the example data set:
insert into REPOSITORY.PRODUCT_RATES (PRODUCT, COMPONENT, RATE, START_DATE, END_DATE) values ('family', 'wife', 500, to_date('01-01-2000','dd-mm-yyyy'), to_date('31-12-2001','dd-mm-yyyy'))
insert into REPOSITORY.PRODUCT_RATES (PRODUCT, COMPONENT, RATE, START_DATE, END_DATE) values ('family', 'wife', 999, to_date('01-01-2002','dd-mm-yyyy'), to_date('01-01-2050','dd-mm-yyyy'))
insert into REPOSITORY.PRODUCT_RATES (PRODUCT, COMPONENT, RATE, START_DATE, END_DATE) values ('family', 'baby', 250, to_date('01-01-2000','dd-mm-yyyy'), to_date('31-12-2004','dd-mm-yyyy'))
insert into REPOSITORY.PRODUCT_RATES (PRODUCT, COMPONENT, RATE, START_DATE, END_DATE) values ('family', 'baby', 500, to_date('01-01-2005','dd-mm-yyyy'), to_date('01-01-2050','dd-mm-yyyy'))
Notice there is a change of price for Component 'chassis', so the output I'm looking for is:
row, product, component, rate, start date, end date, sum
1, car, chassis, 180, 01/01/2000, 31/12/2009, 280
2, car, engine, 100, 01/01/2000, 31/12/2009, 280
3, car, chassis, 200, 01/01/2010, 01/01/2050, 300
4, car, engine, 100, 01/01/2010, 01/01/2050, 300
Explain how you get 4 rows of output when the table contains only 3 rows. Are you saying that, because some row has end_date=31/12/2009, then any other row that includes that date has to be split into two, with one row ending on 31/12/2009 and the other one beginning on the next day?
Explain, step by step, how you get the values in the desired output, especially the last column.
But in reality all I need is:
Sorry, I can't understand what you want.
Are you saying that the output above sould be acceptable, but the output below would be even better?
row, product, start date, end date, sum
1, car, 01/01/2000, 31/12/2009, 280
2, car, 01/01/2010, 01/01/2050, 300
Preferably the query would be in a view rather than a stored procedure, and it needs to be able to handle many 'products', 'components' and start/end dates.
Include a couple of different products in your sample data and results.
I'm not sure what you want, but there's nothing in what you've said so far that makes me think a stored procedure would be needed.
The only output I actually require is:
row, product, start date, end date, sum
1, car, 01/01/2000, 31/12/2009, 280
2, car, 01/01/2010, 01/01/2050, 300
and with the extended data set:
3, family, 750, 01/01/2000, 31/12/2001
4, family, 1249, 01/01/2002, 31/12/2004
5, family, 1499, 01/01/2005, 31/12/2050
however, I was thinking that the data set would need to be somehow expanded to get to the above end result, hence why I included the 'middle step' of:
row, product, component, rate, start date, end date, sum
1, car, chassis, 180, 01/01/2000, 31/12/2009, 280
2, car, engine, 100, 01/01/2000, 31/12/2009, 280
3, car, chassis, 200, 01/01/2010, 01/01/2050, 300
4, car, engine, 100, 01/01/2010, 01/01/2050, 300
however, this may be irrelevant.
By the way, there's no point in using the same expression in both the PARTITION BY and ORDER BY clauses of the same analytic function call. For example, if you "PARTITION BY parent", then, when "ORDER BY parent, child" is evaluated, rows will only be compared to other rows with the same parent, so they'll all tie for first place in "ORDER BY parent".
OK, thanks.
So far I have got to:
select
sum(rate) over (partition by product) as sum,
a.*
from product_rates a
which results in:
SUM PRODUCT COMPONENT RATE START_DATE END_DATE
480 car engine 100 2000-01-01 00:00:00.0 2050-01-01 00:00:00.0
480 car chassis 200 2010-01-01 00:00:00.0 2050-01-01 00:00:00.0
480 car chassis 180 2000-01-01 00:00:00.0 2009-12-31 00:00:00.0
2249 family baby 250 2000-01-01 00:00:00.0 2004-12-31 00:00:00.0
2249 family wife 999 2002-01-01 00:00:00.0 2050-01-01 00:00:00.0
2249 family baby 500 2005-01-01 00:00:00.0 2050-01-01 00:00:00.0
2249 family wife 500 2000-01-01 00:00:00.0 2001-12-31 00:00:00.0
but this shows that all price variations for a component over time are being summed (e.g. car engine 100 + car chassis 200 + car chassis 180 = 480)
Hope that goes some way to explaining my query better.
Also, a quick query to improve my postings - how do I indent without making text italic, and how do you make code a different font?
Thanks again.
Julian
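The range-splitting "middle step" Julian describes can be sketched procedurally: cut each product's timeline at every start date and at the day after every end date, then sum the rates active in each resulting segment. This is plain illustrative Python (the function name product_totals is made up here, and it is not the view the thread ultimately asks for), using the thread's car data.

```python
# Split overlapping rate ranges at every boundary, then sum active rates.
from datetime import date, timedelta

rates = [  # (product, component, rate, start, end)
    ('car', 'chassis', 180, date(2000, 1, 1), date(2009, 12, 31)),
    ('car', 'chassis', 200, date(2010, 1, 1), date(2050, 1, 1)),
    ('car', 'engine',  100, date(2000, 1, 1), date(2050, 1, 1)),
]

def product_totals(rows):
    """Per product: non-overlapping (product, start, end, total_rate) segments."""
    out = []
    for product in sorted({r[0] for r in rows}):
        prows = [r for r in rows if r[0] == product]
        # every start date, and the day after every end date, begins a segment
        bounds = sorted({r[3] for r in prows} |
                        {r[4] + timedelta(days=1) for r in prows})
        for lo, hi in zip(bounds, bounds[1:]):
            total = sum(r[2] for r in prows if r[3] <= lo <= r[4])
            if total:   # skip gaps where no rate is active
                out.append((product, lo, hi - timedelta(days=1), total))
    return out

for seg in product_totals(rates):
    print(seg)
```

For the car rows this yields exactly the two desired segments: 280 up to 31/12/2009, then 300 from 01/01/2010. The same boundary-cutting idea can be expressed in SQL with an unpivot of start/end dates plus an analytic sum, which is likely where the eventual view would go.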