Creation of VMs question
Hi All
I have worked with VMs before, but I just want some general assistance/advice. I'm currently wanting to run an experiment with Server 2008 R2. Here's what I want to do: I want to run Forefront, a DC and Exchange on one machine. My question is this.
Do I make a server that just runs Hyper-V, with one VM for the DC, another for Forefront and another for Exchange? Or do I make a server with Hyper-V that also runs Forefront and the DC itself, with just one VM for Exchange?
Forefront is not a must, as I can ditch that idea in favour of one of my Linux-based firewalls.
My idea and understanding would be: one physical server that just runs Hyper-V, and two virtual servers, one for AD/DC and another for Exchange. If I keep Forefront, it would sit on the physical side for simplicity.
Most of my VM work has been single-OS testing rather than multi-server OS testing.
Lee, Snr Systems Administrator
Installing the Hyper-V role and a DC on the same OS is possible, but it is not a good idea for several reasons and is also not supported by Microsoft. So create a VM for the DC.
-->
http://www.altaro.com/hyper-v/reasons-not-to-make-hyper-v-a-domain-controller/
Similar Messages
-
Automatic Delivery creation through VMS
Dear Automotive wizards,
We are using the Vehicle Management System for our automotive process as a distributor and retailer.
The sales order/delivery settings are NOT set for automatic delivery creation. However, in some cases the delivery is getting created automatically in the background.
I have done some introspection into these 'some' cases and found that they pertain to our used car business. There we are using only standard document types, and no changes have been made to their settings. But in some of those cases, whenever a sales order is created from the VMS, R/3 also creates a delivery. This delivery doesn't show in the vehicle history and hence has to be dug out by looking at the document flow for the sales order. The order/delivery settings are NOT set for automatic creation.
Please advise how to go about resolving this unnecessary 'speed breaker' on the IS AUTO highway!
Thanks and regards,
Tariq
Thanks for the pointers, but I have explored that earlier, and the info there shows that the delivery was created by the same ID and at the same time the order was created.
In fact the user is oblivious to the situation too, since he/she is working in the VMS and from there is supposed to move on to create the delivery after the order. But this delivery gets created by itself in R/3, and you will recall that the R/3 documents do not get reflected back and hence do not update the VMS vehicle status. Thereby the vehicle status is still "sales order created" in VMS while R/3 has "delivery created"; because of the mismatch, the VMS stops further processing.
From the above you would also have gathered that we don't have batch processing or anything like that; it is pure, simple, manual creation of the delivery through the VMS.
The strangest thing is that it happens only some of the time, and only for some users. I have tried comparing the roles for access, but all have the same access. Could there be any storage-location-specific settings in R/3 for delivery creation, or something similar?
Thanks and regards, -
Hi,
At the steel company where I work, we are starting to look at implementing a new ERP (i-Renaissance). We are seriously considering having the DB on Oracle, running on Windows or perhaps Unix. Some other people here are considering having it on VMS.
I am asking the broad question of the pros and cons of Oracle on VMS, with three aspects in mind:
- Reliability of the DB on VMS
- Cost of the VMS implementation vs. windows. (license, OS and hardware)
- Support for potential issues (both from oracle and from community such as these forums)
What about Rdb? Should we consider it?
Please keep in mind that we are at the beginning of this decision process and that I am new to oracle. I need to make an informed decision on this.
Best wishes to all!
Well, thank you so far for the replies.
Currently where I work we have a bunch of windows servers for most of everything we need. Our aging ERP system is VMS/ALPHA based (the alpha is 12 years old I think). We will have the ERP company come and migrate us to the new system.
Some people here think (since that is what they know), that the DB and application server for the ERP should all be on VMS/ALPHA. I do not reject VMS outright, but I too have doubts about VMS's future although I KNOW it is a solid OS. I also wonder how long will Oracle develop the DB on VMS.
So we will definitely consult with the ERP company for advice on those matters.
So my question boils down to this: if running the DB and application on Windows is a viable option for the ERP, should I say no to VMS based on the cloudy future of the OS?
Also, will the VMS server performance be similar to a Windows server solution, provided we get a new Alpha?
What about cost of the 2 solutions?
What about support in case of issues with the DB? Should I be worried about this on a VMS based DB?
Thanks so far for the replies; they are helping my thought process.
Later! -
Dynamic View Object Creation and Scope Question
I'm trying to come up with a way to load an SQL statement into a view object, execute it, process the results, then keep looping, populating the view object with a new statement, etc. I also need to handle any bad SQL statement and keep going. I'm running into a problem that is split between the way Java scopes objects and the available methods of a view object. Here's some pseudo code:
while (more queries) {
    try {
        ViewObject myView = am.createViewObjectFromQueryStmt("myView", query); // fails if previous query was bad
        myView.executeQuery();
        Row myRow = myView.first();
        int rc = myView.getRowCount();
        int x = 1;
        myView.first();
        outStr = "";
        int cc = 0;
        while (x <= rc) { // get query output
            Object[] result = myRow.getAttributeValues();
            while (cc < result.length) {
                outStr = outStr + result[cc].toString();
                cc = cc + 1;
            }
            x = x + 1;
        }
        myView.remove();
    } catch (Exception sql) {
        sql.printStackTrace();
        myView.remove(); // won't compile, out of scope
    } finally {
        myView.remove(); // won't compile, out of scope
    }
    // do something with query output
}
Basically, if the queries are all perfect, everything works fine, but if a query fails, I can't execute myView.remove() in an exception handler, nor can I clean it up in a finally block. The only other way I can think of to handle this would be to reuse the same view object and just change the SQL being passed to it, but there are no methods to set the SQL directly on the view object, only at creation time via a method call on the application module.
Can anyone offer any suggestions as to how to deal with this?
I figured this out. You can pass a null name to the createViewObjectFromQueryStmt method, which apparently creates a unique name for you. I got around my variable scoping issue by rethinking my loop logic.
-
10.7 client .dmg creation for deployment questions.
Please forgive me if this question is in the wrong forum.
I've been doing searches online and in the 10.7 Peachpit books (client and server) and I can't seem to find the info I am looking for.
I am trying to create a 10.7 .dmg to use on new Macs my company is going to deploy. We are not using 10.7 Server at the moment, we
are using 10.6.8 Server. This will not be an image we are going to deploy over the network either. I know this may not be "best practices"
but at the moment, this is the way we are going to (re)image new Macs.
Basically, I want to create a 10.7 .dmg that does NOT contain the recovery partition. I can't seem to find a way to do this. If I am correct,
even a "clean" install, when booted from a USB 10.7 recovery drive, will create the recovery partition, right?
I am running 10.7 client and i have the 10.7.3 Server Admin tools.
I apologize in advance if I am missing something glaringly obvious.
Also, any tips on best practices for creating 10.7 client .dmgs for deployment that's any different than creating 10.6 images?
Thanks in advance.
Using information from this site and my own scripting experience, I present to you a more secure way to do it which supports Munki and other deployment tools, without having the password for the ODM or client in clear text on the client, or in packages easily accessible on an HTTP server:
On server:
ssh-keygen
Save the output of ~/.ssh/id_rsa.pub to your clip board
Then create a launchd job (or similar) so that this runs at startup:
nc -kl 1337 | xargs -n 1 -I host ssh -q -o StrictHostKeyChecking=no root@host /usr/local/bin/setupLDAP diradminpassword localadminpassword > /dev/null 2>&1
On client:
Create script (to use in a package as postinstall or something):
#!/bin/bash
# Turns on ssh
systemsetup -f -setremotelogin On
# Sets up passwordless login to root account from server
echo "ssh-rsa FROM_YOUR_CLIPBOARD_A_VERYLONGOUTPUTOFCHARACTERS [email protected]" >> /var/root/.ssh/authorized_keys
# installs setupLDAP
mkdir -p /usr/local/bin
cat > /usr/local/bin/setupLDAP <<'EOF'
#!/bin/sh
PATH=/bin:/sbin:/usr/bin:/usr/sbin
export PATH
computerid=`scutil --get ComputerName`; yes | dsconfigldap -vfs -a 'server.domain.no' -n 'server' -c $computerid -u 'diradmin' -p $1 -l 'l' -q $2
EOF
chmod +x /usr/local/bin/setupLDAP
End note
That was the code; now you just add the skeleton. To clarify what this does: first we let the server connect to the client as root, even though root access is "disabled" (root has no password, so you can't log in as root by default). Then we create a small script to set up the OD binding (/usr/local/bin/setupLDAP), but this script doesn't contain the passwords. The client sends a request with its hostname to the small socket server on the server; the server then connects to that hostname and executes /usr/local/bin/setupLDAP with the needed passwords. -
Bios Disk Creation & Hard Drive Question A22p
I am trying to create a disk to flash the BIOS on an A22p ThinkPad. I have followed the directions on the download page but can't get beyond the DOS screen that contains the license info, so nothing ever gets copied to my floppy (despite using the command line per the instructions). I am very frustrated; any help on how to make a disk would be appreciated!
Second question: I am trying to upgrade the hard drive to 160 GB. Since the install of the OS stalls, I am assuming it has something to do with the drive size, hence my reason for updating the BIOS. Does anyone know the max size that can be installed in an A22p?
Welcome to the forum!
There is an option of non-diskette BIOS upgrade, and that's the route I normally take. Save it to your desktop in Windows and run it from there.
http://www-307.ibm.com/pc/support/site.wss/document.do?sitestyle=lenovo&lndocid=MIGR-4Q2KM3
I don't have a positive answer regarding the HDD size, but I can tell you that I've had a 100 GB drive in an A22p. Some older machines have a BIOS limit somewhere around 137 GB, if I'm not mistaken. Whether the A22p falls into this group or not, I'm not 100% sure. Even if it does, I believe it would simply fail to recognize the "excess" size, so your issue may be someplace else.
Hope this helps.
Cheers,
George
In daily use: R60F, R500F, T61, T410
Collecting dust: T60
Enjoying retirement: A31p, T42p,
Non-ThinkPads: Panasonic CF-31 & CF-52, HP 8760W
Starting Thursday, 08/14/2014 I'll be away from the forums until further notice. Please do NOT send private messages since I won't be able to read them. Thank you. -
Dynamic Table Creation using RTTS - Question on Issue
I am using the RTTS services to dynamically create a table. However, when I attempt to use the get_components method, it does not return all the components for all of the tables I am working with.
Cases and End Results
1) Created a structure in the data dictionary: get_components works.
2) Pass PA0001, P0001 or T001P to get_components and I do not receive the correct results: it only returns a small subset of these objects' components. The components table has all the entries; however, the get_components method returns only about 4 rows.
Can you explain this and point me to the correct logic sequence? I would like the logic to work with all tables.
Code excerpt below:
The logic for get_components works for case 1, but not case 2. When I pass this method a "Z" custom table or structure name in the structure_name parameter, the get_components() method executes and returns the correct results. However, when I pass in any of the objects listed in case 2, get_components does not return the correct number of entries. When I check typ_struct, which is populated right before the get_components line, I see the correct number of components in its components table.
method pb_add_column_headings .
constants: c_generic_struct type c length 19 value 'ZXX_COLUMN_HEADINGS'.
data: cnt_lines type sytabix,
rf_data type ref to data,
typ_field type ref to cl_abap_datadescr,
typ_struct type ref to cl_abap_structdescr,
typ_table type ref to cl_abap_tabledescr.
data: wa_col_headings type ddobjname.
data: rf_tbl_creation_exception type ref to
cx_sy_table_creation,
rf_struct_creation_exception type ref to
cx_sy_struct_creation.
data: t_comp_tab type
cl_abap_structdescr=>component_table,
t_comp_tab_new like t_comp_tab,
t_comp_tab_imported like t_comp_tab.
field-symbols: <fs_comp> like line of t_comp_tab,
<fs_col_headings_tbl> type any table.
**Get components of generic structure
wa_col_headings = c_generic_struct.
typ_struct ?= cl_abap_typedescr=>describe_by_name(
wa_col_headings ).
t_comp_tab = typ_struct->get_components( ).
**Determine how many components in imported structure.
typ_struct ?= cl_abap_typedescr=>describe_by_name(
structure_name ).
t_comp_tab_imported = typ_struct->get_components( ).
cnt_lines = lines( t_comp_tab_imported[] ).
Hi Garton,
1. GET_COMPONENT_LIST
Use this FM.
It gives all fieldnames.
2. Use this code (just copy paste)
REPORT abc.
DATA : cmp LIKE TABLE OF rstrucinfo WITH HEADER LINE.
DATA : pa0001 LIKE TABLE OF pa0001 WITH HEADER LINE.
CALL FUNCTION 'GET_COMPONENT_LIST'
EXPORTING
program = sy-repid
fieldname = 'PA0001'
TABLES
components = cmp.
LOOP AT cmp.
WRITE :/ cmp-compname.
ENDLOOP.
regards,
amit m. -
SDK: set default value to "managed by inventory" option when item creation.
Currently, my customer wants to set a default value (true) for the "managed by inventory" option under the inventory tab at item creation.
My questions:
1. How to detect the Item Master form in add/find mode.
2. How to set the option default to true.
Regards,
Kenneth Tse
Hi Kenneth,
Okay, you need to do the following.
1) You need to see when the item form opens and what mode it is in, and then handle the event. So in ItemEvent you'll have code similar to:
If (FormType = "AddMov" And pval.FormMode="Your Mode") then
***your code
end if
You can also just watch the form and check the caption of the button.
2) to do the second part,
Dim oitem As SAPbobsCOM.Items
oitem = oCompany.GetBusinessObject(SAPbobsCOM.BoObjectTypes.oItems)
If oitem.GetByKey("Item Code") Then
oitem.GLMethod = SAPbobsCOM.BoGLMethods.glm_WH
End If
Hope this helps -
I have Windows XP and Photo Creations is awesome. I'm getting a new computer and will have Windows 7 . Will I be able to download Photo Creations?
This question was solved.
Thanks, JRhonda!
We're glad you like HP Photo Creations. When you move to your new computer, you can bring your HP Photo Creations projects along. Just copy the entire project folder to the new machine. Then if you want to reprint or edit a project, just click the Open button on the home screen and select the project folder you want.
Thanks for using HP Photo Creations,
RocketLife
RocketLife, developer of HP Photo Creations
» Visit the HP Photo Creations Facebook page — news, tips, and inspiration
» See the HP Photo Creations video tours — cool tips in under 2 minutes
» Contact Customer Support — get answers from the experts -
How to 'Export' a Template cloned from a VM in Oracle VM 3.1.1
I've used the OVM Manager to clone a stopped VM into a Template. I can see the Template in the Manager GUI and use it to create new VMs on the same or other servers within the same OVM Manager space.
Now, I'd like to take this newly created Template and import it into other OVM Managers at another location for creation of VMs on servers there.
Is there a way to 'export' the template files (System.img, vm.cfg, etc.) for copying and import into other OVM Manager spaces?
Thanks.
The previous post by 911182 was helpful (thanks!), but not quite accurate (at least not in my case).
The post led me directly into the Repository where I found only the vm.cfg file under the Templates directory
/OVS/Repositories/<Repos_UUID>/Templates/<template_UUID>/vm.cfg
Within the vm.cfg file however, I found that the location of the disk file for the template to be at:
/OVS/Repositories/<Repos_UUID>/VirtualDisks/<disk_UUID>.img
So copying just the Template directory will not get all the files for the template; you'd need to copy the disk files from the VirtualDisks directory as well.
Is going into the Repository and finding all the files for the Template the only way to achieve this?
Is there really no export feature in Oracle VM Manager to gather up all the files of the template and tgz them up for you?
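Until such an export feature exists, the manual gathering described above can be scripted. Below is a minimal sketch, assuming the repository layout quoted above; the function name is mine, and the vm.cfg parsing (grabbing every *.img path it mentions) is deliberately naive:

```shell
#!/bin/sh
# Sketch: bundle an OVM template (its vm.cfg plus every virtual disk it
# references) into one tarball for transfer to another OVM Manager.
# Assumes paths contain no spaces. Usage: export_template REPO TMPL_UUID OUT.tgz
export_template() {
    repo=$1; tmpl=$2; out=$3
    cfg="$repo/Templates/$tmpl/vm.cfg"
    [ -f "$cfg" ] || { echo "no vm.cfg for template $tmpl" >&2; return 1; }
    # Naive parse: pull every absolute *.img path mentioned in vm.cfg.
    disks=$(grep -o "/[^',\"]*\.img" "$cfg" | sort -u)
    members="Templates/$tmpl/vm.cfg"
    for d in $disks; do
        case "$d" in
            "$repo"/*) members="$members ${d#"$repo"/}" ;;  # make repo-relative
        esac
    done
    # Archive relative to the repository root so the tarball can be
    # untarred straight into the target repository mount point.
    tar -czf "$out" -C "$repo" $members
}
```

Untarring into the target repository restores both the Templates/&lt;uuid&gt; directory and the referenced VirtualDisks files; the template still has to be rediscovered by the target OVM Manager.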
EBP - Backend Reservation - Availability Check
Hello
We have a requirement to use the classic scenario with reservations for inventory items. The system (SRM 4.0) is configured to determine whether to create a reservation (if stock is available) or external procurement in the backend R/3. The standard function checks for stock availability at the time of shopping cart creation. The question is whether the system can check stock availability right after the final approval instead. Is there a BAdI I can utilize?
Another question regards services procurement. The standard EBP supports buying services by quantity (based on UOM) and by value. Can the standard EBP procure services by value ($) only? For example, if the buyer has a fixed budget to procure services, he/she does not want to specify days/months for the services, only the budget amount. How would the system support this?
Also, the same question on the receiving/confirming end (receiving by value).
Your comments are much appreciated.
Hi Christophe,
This sounds exactly as what we want. The only catch is we don't have a sandbox in Classic scenario to play with it yet. As soon as we have our box up, we will do this.
Another 2 follow-on questions on the same token.
1. During SC creation, the requester can check the availability in the "Availability" tab. Assume multiple plants have sufficient stock for this item. Does the requester have to select a plant before submitting the SC?
2. Assuming Plant 1001 is selected at SC creation, but stock is not available/sufficient after the final approval while other plants (say Plant 1002) in the same company code have sufficient stock: would the system switch to Plant 1002 for the reservation, or create an external procurement document (PR/PO), with standard SRM 4.0?
Thanks,
Joseph -
Oracle 11g Installation How to select Database Character Set
Hi,
I am installing Oracle 11g R2. After installation I verified the character set: it was AL16UTF16, but I want to set the AL32UTF8 character set at installation time. I can't see the character set setting option during installation, because I am selecting the installation option "Install database software only" and creating the database manually after the software installation. Please help me: how can I set the character set at Oracle installation time, or at database creation time?
My question is:
How can I set the AL32UTF8 character set in the above scenario?
Why is it showing the AL16UTF16 character set even though I did not define anything?
But is there any way to set NLS_CHARACTERSET at the time of manual database creation? For creating the database I am using a shell script to set the parameter values in the init.ora file, so can I set the parameter at that level (at the time of creating init.ora during manual database creation)? Like:
## Original init.ora file created by manual database creation tool on ${DATE}.
*.aq_tm_processes=0
*.background_dump_dest='$ORACLE_BASE/admin/$ORACLE_SID/bdump'
*.compatible='10.2.0'
*.control_files='/$db_file_loc/oradata/$ORACLE_SID/control01.ctl','/$db_file_loc/oradata/$ORACLE_SID/control02.ctl','/$db_file_loc/oradata/$ORACLE_SID/control03.ctl'
*.core_dump_dest='$ORACLE_BASE/admin/$ORACLE_SID/cdump'
*.db_block_size=8192
*.db_cache_size=104857600
*.db_domain='$server_name'
*.db_file_multiblock_read_count=8
*.db_name='$ORACLE_SID'
*.fast_start_mttr_target=300
*.instance_name='$ORACLE_SID'
*.java_pool_size=16777216
*.job_queue_processes=4
*.large_pool_size=16777216
*.log_archive_dest='/u05/oradata/$ORACLE_SID'
*.log_archive_format='$ORACLE_SID_%s_%t_%r.arc'
*.olap_page_pool_size=4194304
*.open_cursors=500
*.optimizer_index_cost_adj=50
*.pga_aggregate_target=536870912
*.processes=1500
*.query_rewrite_enabled='TRUE'
*.remote_login_passwordfile='EXCLUSIVE'
*.shared_pool_size=83886080
*.sort_area_size=1048576
*.sga_max_size=1048576000
*.sga_target=536870912
*.star_transformation_enabled='TRUE'
*.timed_statistics=TRUE
*.undo_management='AUTO'
*.undo_retention=10800
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='$ORACLE_BASE/admin/$ORACLE_SID/udump'
*.utl_file_dir='/export/home/oracle/utlfiles'
*.nls_characterset='AL32UTF8'
EOF
Is it correct? -
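Note that NLS_CHARACTERSET is not an initialization parameter, so the line above will not take effect from init.ora; the database character set is chosen in the CREATE DATABASE statement itself. Since the database is already being created from a shell script, that script can generate the CREATE DATABASE statement with the character set baked in. A minimal sketch (the SID default and the surrounding clauses are illustrative only):

```shell
#!/bin/sh
# Generate a CREATE DATABASE script with the character set baked in,
# rather than putting nls_characterset in init.ora (where it is ignored).
DB_SID=${DB_SID:-orcl}          # illustrative default SID
CHARSET=${CHARSET:-AL32UTF8}
NCHARSET=${NCHARSET:-AL16UTF16}

cat > "createdb_${DB_SID}.sql" <<EOF
CREATE DATABASE ${DB_SID}
  CHARACTER SET ${CHARSET}
  NATIONAL CHARACTER SET ${NCHARSET}
  UNDO TABLESPACE undotbs1
  DEFAULT TEMPORARY TABLESPACE temp;
EOF
```

Running the generated script from SQL*Plus as SYSDBA (after starting the instance in NOMOUNT with your init.ora) creates the database as AL32UTF8 from the start, which avoids a character set conversion later.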
Best Practice for vDS, Uplinks and TF ?
Hi,
Three questions about vNetwork Distributed Switch:
My environment:
- Datacenters: 2
- Hosts: 50 (25 in each datacenter)
- Cluster: 10 (5 in each datacenter) (2 to 5 nodes per cluster)
- NICs per host:
- 1 NIC for management
- 1 NIC for redundant management and vMotion
- 2, 4 or 8 NICs trunked to internal networks (15 VLANs)
- 2, 4 or 8 NICs trunked to external networks (10 VLANs)
- 1 NIC for backup
- Over 1500 VMs
Question 1 - Speaking only of internal, external and backup networks, how many vDS should I have?
- 6 vDS (2 internal, 2 external and 2 backup, half in each datacenter), right?
Question 2 - I have clusters of different sizes; some cluster hosts have 2 NICs for internal networks while other cluster hosts have 4 or 8 NICs. So how many dvUplinks should I have in each vDS? What is the impact of having uplinks without a NIC attached in some clusters?
- vDS_Internal = 8 uplinks?
Question 3 - What is good practice for Teaming and Failover in this case: "Route based on originating virtual port" or "Route based on physical NIC load", given that I'm not using Network I/O Control?
thanks,
Reis
Q1. With VLANs you can also use a single vDS with several portgroups (and for some, like FT, vMotion, etc., use explicit uplink order).
Q2. This could be a little problem. With a vDS you can map uplinks to pNICs... but DRS (for example) cannot know if your host has fewer uplinks. If the difference is minimal, go with a single cluster; otherwise consider using 2 clusters.
Q3. The virtual port solution could be simple and good in most cases.
Andre -
Hi SAP Helpers,
I have a doubt about APP (the Automatic Payment Program). I have configured all the settings for APP and maintained all the parameters in F110. I am very comfortable up to payment proposal creation. My question is: after the payment proposal,
1. How do I maintain the print schedule (what are the required fields to maintain here)?
2. How do I print the checks?
3. Which type of printer is used in real life?
4. Also, I heard somewhere that for single checks you use one type of printer and for long runs (100 checks at a time) another printer. Is that correct?
please give me your valuable advices for printing checks
I will surely assign points; advance thanks to all.
VK
Hi VK,
Once you have maintained all the parameters of F110, you maintain the run date and the run schedule.
The run date is when you execute the program.
The run schedule is the period, i.e. the from/to dates. For example, if you want to run for the month of Nov 07, you maintain a run schedule of 01.11.2007 to 30.11.2007. You also maintain the next run date: if today's run date is 01.12.2007 and you run monthly, the next run date will be 01.01.2008. Then execute the program. The printout is mostly laser print only; APP means Automatic Payment, and it doesn't take a single check, it takes many checks.
After executing the program, go and check the log; there you will see the details of the vendors and the amounts.
After that, go back to the menu bar, then System, and under own spool requests you will find the details of the check printing.
Try this once, and revert to me with any query.
Subbu -
Materialized views on prebuilt tables - query rewrite
Hi Everyone,
I am currently planning to implement query rewrite functionality via materialized views, to leverage existing aggregated tables.
Goal: to use aggregate-awareness for our queries
How: by creating materialized views on existing aggregates loaded via ETL (CREATE MATERIALIZED VIEW xxx ON PREBUILT TABLE ENABLE QUERY REWRITE)
Advantage: leverages Oracle functionality + renders the logical model simpler (no aggregates)
Disadvantage: existing ETLs need to be rewritten as SQL in the view creation statement, so the aggregation rule exists twice (once in the DB, once in the ETL)
Issue: certain ETLs are quite complex, with lookups, functions, etc., which might create overly complex SQL in the view creation statements
My question: is there a way around the issue described? (I'm assuming the SQL in the view creation is necessary for Oracle to know when an aggregate can be used.)
Best practices & shared experiences are welcome as well of course
Kind regards,
Peter
streefpo wrote:
I'm still in the process of testing, but the drops should not be necessary.
Remember: The materialized view is nothing but a definition - the table itself continues to exist as before.
So as long as the definition doesn't change (added column, changed calculation, ...), the materialized view doesn't need to be re-created (as the data is not maintained by Oracle).
Thanks for reminding me, but if you find a documented approach I will be waiting, because this was the basis of my argument from the beginning.
SQL> select * from v$version ;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL> desc employees
Name Null? Type
EMPLOYEE_ID NOT NULL NUMBER(6)
FIRST_NAME VARCHAR2(20)
LAST_NAME NOT NULL VARCHAR2(25)
EMAIL NOT NULL VARCHAR2(25)
PHONE_NUMBER VARCHAR2(20)
HIRE_DATE NOT NULL DATE
JOB_ID NOT NULL VARCHAR2(10)
SALARY NUMBER(8,2)
COMMISSION_PCT NUMBER(2,2)
MANAGER_ID NUMBER(6)
DEPARTMENT_ID NUMBER(4)
SQL> select count(*) from employees ;
COUNT(*)
107
SQL> create table mv_table nologging as select department_id, sum(salary) as totalsal from employees group by department_id ;
Table created.
SQL> desc mv_table
Name Null? Type
DEPARTMENT_ID NUMBER(4)
TOTALSAL NUMBER
SQL> select count(*) from mv_table ;
COUNT(*)
12
SQL> create materialized view mv_table on prebuilt table with reduced precision enable query rewrite as select department_id, sum(salary) as totalsal from employees group by department_id ;
Materialized view created.
SQL> select count(*) from mv_table ;
COUNT(*)
12
SQL> select object_name, object_type from user_objects where object_name = 'MV_TABLE' ;
OBJECT_NAME OBJECT_TYPE
MV_TABLE TABLE
MV_TABLE MATERIALIZED VIEW
SQL> insert into mv_table values (999, 100) ;
insert into mv_table values (999, 100)
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view
SQL> update mv_table set totalsal = totalsal * 1.1 where department_id = 10 ;
update mv_table set totalsal = totalsal * 1.1 where department_id = 10
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view
SQL> delete from mv_table where totalsal <= 10000 ;
delete from mv_table where totalsal <= 10000
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view
While investigating for this thread I actually made my own question redundant, as the answer gradually became clear:
When using complex ETL's, I just need to make sure the complexity is located in the ETL loading the detailed table, not the aggregate
I'll try to clarify through an example:
- A detailed Table DET_SALES exists with Sales per Day, Store & Product
- An aggregated table AGG_SALES_MM exists with Sales, SalesStore per Month, Store & Product
- An ETL exists to load AGG_SALES_MM where Sales = SUM(Sales) & SalesStore = (SUM(Sales) Across Store)
--> i.e. the SalesStore measure will be derived out of a lookup
- A (Prebuilt) Materialized View will exist with the same column definitions as the ETL
--> to allow query-rewrite to know when to access the table
My concern was how to include the SalesStore in the materialized view definition (--> complex SQL!)
--> I should actually include SalesStore in the DET_SALES table, thus:
- including the 'Across Store' function in the detailed ETL
- rendering my Aggregation ETL into a simple GROUP BY
- rendering my materialized view definition into a simple GROUP BY as well
Not sure how close your example is to your actual problem. I also don't know if you are doing an incremental or complete data load, or what the data volume is.
But the "SalesStore = (SUM(Sales) Across Store)" measure can be derived from the aggregated MV using an analytic function. One can just create a normal view on top of the MV for querying. It is hard to believe that aggregating in the detail table during the ETL load is the best approach, but what do I know?
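To make that last suggestion concrete, here is a sketch of such a normal view on top of the MV, with the store-level total derived by an analytic SUM at query time instead of being pre-computed in the detail ETL. Table and column names are illustrative, following the DET_SALES/AGG_SALES_MM example above; the DDL is emitted to a file here simply so it can be dropped into a creation script:

```shell
#!/bin/sh
# Emit the "normal view on top of the MV": sales_store is computed across
# all stores for each month/product with an analytic SUM at query time.
cat > v_agg_sales.sql <<'EOF'
CREATE OR REPLACE VIEW v_agg_sales AS
SELECT month_id,
       store_id,
       product_id,
       sales,
       -- total per month and product across all stores
       SUM(sales) OVER (PARTITION BY month_id, product_id) AS sales_store
FROM agg_sales_mm;
EOF
```

With this, the aggregation rule lives in one place and the aggregate ETL can stay a simple GROUP BY feed.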