How to generate an impulse to test for a short circuit in an inductor
Hello,
I'm new to LabVIEW and I need to perform a SURG (surge stress) test.
The purpose of this test is to detect an inter-turn short by applying a number of high
voltage impulses (or surges) to the selected winding.
Each impulse should produce a sinusoidal transient which eventually decays to zero.
How can I generate such an impulse using LabVIEW?
Hi Swathi,
Please see the function "Impulse Pattern.vi" in the Signal Processing --> Signal Generation palette.
Alternatively, you can browse through the examples in LabVIEW.
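If you would rather synthesize the waveform yourself than use Impulse Pattern.vi, the decaying sinusoidal transient described above can be sketched as follows (Python shown purely for illustration; the amplitude, ring frequency and decay constant are placeholders to be matched to the winding under test):

```python
import math

def surge_response(n_samples, sample_rate, freq_hz=5e3, tau_s=2e-3, amplitude=1.0):
    """Exponentially decaying sinusoid: A * exp(-t/tau) * sin(2*pi*f*t)."""
    out = []
    for i in range(n_samples):
        t = i / sample_rate
        out.append(amplitude * math.exp(-t / tau_s) * math.sin(2 * math.pi * freq_hz * t))
    return out
```

An inter-turn short lowers the winding inductance, which raises the ring frequency and shortens the decay, so comparing those two parameters between a reference impulse and the test impulse is what reveals the fault.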
Regards,
Srikrishna.J
Similar Messages
-
How to generate protocol independent content by the servlet?
Hi All,
How to generate protocol independent content by the servlet?
Please give some tips.
Dear All,
Can anybody explain it with code? Protocol independent means not the HTTP protocol; you can take any other protocol.
Do I have to use GenericServlet? If so, how? Please explain with code.
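For what it's worth, here is a minimal sketch (not from the original thread): a protocol-independent servlet extends javax.servlet.GenericServlet and works only with the generic ServletRequest/ServletResponse types, so it is not tied to HTTP:

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.GenericServlet;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Protocol-independent: no HttpServletRequest/HttpServletResponse anywhere.
public class PlainServlet extends GenericServlet {
    @Override
    public void service(ServletRequest req, ServletResponse res)
            throws ServletException, IOException {
        res.setContentType("text/plain");
        PrintWriter out = res.getWriter();
        out.println("Hello from a protocol-independent servlet");
    }
}
```

The container supplies whatever concrete request/response implementation matches the protocol it received; the servlet itself never casts to the HTTP-specific subtypes.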
Thanks.. -
Hi,
I am trying to test a GPS receiver and would like to use the PXI-5671 to stream a GPS binary data file which was generated using the GPS toolkit. I tried using the RFSG Arbitrary Waveform Generation.vi but it doesn't work. Is there a sample code that will allow me to do this?
Thank you.
Hello,
The NI-RFSG driver certainly allows you to stream waveform files to signal generators without the use of the GPS Toolkit, as shown here. However, the NI-RFSG driver does not provide the user with out-of-box applications that can stream GPS waveforms with Almanac and Ephemeris information to various signal generators. For this reason, the NI GPS toolkit will be needed.
The only alternative is to gather your own Almanac and Ephemeris files and implement your own functionality using the base RFSG driver to generate and stream the GPS signals. This is something you would need to do on your own, and it could prove difficult, which makes the GPS Toolkit the best option.
I hope this information helps.
Regards
Cameron T
Applications Engineer
National Instruments -
How to generate an error in MONI when the message is not processed
Scenario : Proxy - Soap ( Sync )
Requirement: If a message fails at the receiver (perhaps the DB is not up or the server does not accept the request), the error message is returned to the calling program, but MONI still shows a checkered (success) flag.
My response contains -1 if the message cannot be posted to the Web server, for any reason.
Now, can I put a red flag or an error message in MONI every time the response code is -1, while the error message still reaches ERP?
I can register an alert that sends email to the user, as per Michal's blog, but how do I generate the error message in MONI that triggers it?
Regards,
Venkathi,
>>then the error message is returned back to calling program but in MONI, it still shows a checkered flag.
I'm not sure you configured the alerts correctly, as the error
will be visible in the AE message (not in SXI_MONITOR, but from the RWB),
and the alert should have informed you about that.
Regards,
Michal Krawczyk -
How to generate row_number for more than the number of records in the ORDER BY column?
Hi All,
I have a query which contains order quantity, order date and ROW_NUMBER as Id. The code below produces 15 records, along with row numbers for those 15 records. But I need row numbers for the next 12 months too; that is, I need 12 more records with a row number and with order month and order quantity empty.
use [AdventureWorksDW2012]
go
SELECT
SUM([OrderQuantity]) As OrderQuantity
,CONVERT(DATE,[OrderDate]) As OrderDate
,LEFT(d.[EnglishMonthName],3) + '-' + CONVERT(CHAR(4),YEAR([OrderDate])) As OrderMonth
,ROW_NUMBER() OVER(ORDER BY([OrderDate])) As Id
FROM [dbo].[FactResellerSales] f
JOIN [dbo].[DimProduct] p ON f.ProductKey = p.ProductKey
JOIN [dbo].[DimDate] d ON f.OrderDateKey = d.DateKey
WHERE OrderDate >= '2006-04-01' AND EnglishProductName = 'Road-650 Red, 60'
GROUP BY p.[ProductKey], [OrderDate],[EnglishMonthName]
How can I achieve it?
Regards,
Julie
I got a solution for this.
use [AdventureWorksDW2012]
go
declare @min int, @max int
IF OBJECT_ID('Tempdb..#tmp') IS NOT NULL
drop table #tmp
create table #tmp
(
Id int
,OrderQuantity int
,OrderDate date
,OrderMonth varchar(78)
)
insert into #tmp
(
Id
,OrderQuantity
,OrderDate
,OrderMonth
)
SELECT
ROW_NUMBER() over(order by(orderdate)) As id
,SUM([OrderQuantity]) As OrderQuantity
,CONVERT(DATE,[OrderDate]) As OrderDate
,LEFT(d.[EnglishMonthName],3) + '-' + CONVERT(CHAR(4),YEAR([OrderDate])) As OrderMonth
FROM [dbo].[FactResellerSales] f
JOIN [dbo].[DimProduct] p ON f.ProductKey = p.ProductKey
JOIN [dbo].[DimDate] d ON f.OrderDateKey = d.DateKey
WHERE OrderDate >= '2006-04-01' AND EnglishProductName = 'Road-650 Red, 60'
GROUP BY p.[ProductKey], [OrderDate],[EnglishMonthName]
--select * from #tmp
select @min=count(*)+1, @max=count(*)+12 from #tmp
--select @min, @max
IF OBJECT_ID('Tempdb..#tmp_table') IS NOT NULL
drop table #tmp_table
Create table #tmp_table
(
Id int
,OrderQuantity int
,OrderDate date
)
insert into #tmp_table
select Id, OrderQuantity, OrderDate from #tmp
--select * from #tmp_table
while(@max >= @min)
begin
insert #tmp_table(id,OrderDate)
select @min,DATEADD(mm,1,max(orderdate))
from #tmp_table
set @min=@min+1
end
select * from #tmp_table
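The same padding idea can be sketched outside SQL; here is a small Python illustration (hypothetical helper; rows are assumed to be (first-of-month date, quantity) pairs sorted by date):

```python
from datetime import date

def pad_months(rows, extra=12):
    """rows: sorted list of (order_date, qty). Returns (id, order_date, qty)
    tuples with `extra` empty months appended after the last month."""
    out = [(i + 1, d, q) for i, (d, q) in enumerate(rows)]
    last = rows[-1][0]
    for i in range(extra):
        months = last.month + i          # offset so i=0 gives the next month
        y = last.year + months // 12
        m = months % 12 + 1
        out.append((len(rows) + i + 1, date(y, m, 1), None))
    return out
```

Each appended row carries only an id and a month, with the quantity left empty (None), matching the requirement of 12 extra records with empty order quantity.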
Regards,
Julie -
How to generate trend chart after submitting the jsp page
hi,
I have a JSP page in which I select some values; after submitting the JSP with the selected values, the trend chart should be displayed.
How do I do this? Please tell me.
Thank you for your reply.
I took JFreeChart, which is a third-party API, but I have one query:
how do I set the limit for the y-axis? I want to set a maximum of 100, but it is taking whatever maximum value I added to the dataset.
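In JFreeChart, the range (y) axis can be fixed explicitly instead of letting it auto-scale to the dataset; a sketch, assuming a standard XYPlot-based chart (the `chart` variable is a placeholder for whatever JFreeChart you already build):

```java
import org.jfree.chart.JFreeChart;
import org.jfree.chart.axis.NumberAxis;
import org.jfree.chart.plot.XYPlot;

public class AxisLimit {
    // Pin the y-axis to 0..100 regardless of the dataset's maximum value.
    static void fixYAxis(JFreeChart chart) {
        XYPlot plot = chart.getXYPlot();
        NumberAxis yAxis = (NumberAxis) plot.getRangeAxis();
        yAxis.setAutoRange(false);   // stop auto-scaling to the data
        yAxis.setRange(0.0, 100.0);  // fixed maximum of 100
    }
}
```

For a CategoryPlot the call would be chart.getCategoryPlot().getRangeAxis() instead, but the setRange idea is the same.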
Does anyone know about this? Please tell me. -
How to generate a ramping voltage in the DAC which is triggered independently
Currently, we use LABVIEW to create an array that is downloaded to a pulseblaster card. It is very useful to us that the pulseblaster continues to execute this array long after the labview program has stopped. We want to use a similar method with a DAC card and a ramping voltage generation program. The tricky part is getting the triggering right.
Any suggestions would be greatly appreciated.
Well, this is really more of a question about the pulseblaster card's programming functions. LabVIEW is just the programming language calling functions for a 3rd party device. Unless anyone here has experience with pulseblaster cards, you'll probably find your best help by contacting someone from SpinCore Technologies. (I think that's who makes pulseblaster, right?) I can tell you anything you want to know about any of the NI cards though.
Good Luck!
Russell
Applications Engineer
National Instruments
http://www.ni.com/support -
How to generate PowerPoint slides from web forms?
Hello,
Does anybody know how to generate PowerPoint slides from web forms? The data will be in the database, just like when we generate a report; in the same way, is there a way to generate PowerPoint slides? If so, could you please provide information. Any examples are appreciated...
Thanks so much....
Hi,
I have never tried it, but in
this link you can see that it should be possible with the
PowerPoint viewer.
Regards,
Markus -
How to develop a host application to test my applet
Hi,
Can anybody suggest how to develop a host application for testing small Java Card applets like HelloWorld?
Hi Markusg,
Why not look at the Direct Input program RV14BTCI for creating, changing and deleting price conditions? This will be convenient for you in handling bulk data as well.
Regards
Saurabh Goel -
Pl/sql boolean expression short circuit behavior and the 10g optimizer
Oracle documents that a PL/SQL IF condition such as
IF p OR q
will always short circuit if p is TRUE. The documents confirm that this is also true for CASE and for COALESCE and DECODE (although DECODE is not available in PL/SQL).
Charles Wetherell, in his paper "Freedom, Order and PL/SQL Optimization," (available on OTN) says that "For most operators, operands may be evaluated in any order. There are some operators (OR, AND, IN, CASE, and so on) which enforce some order of evaluation on their operands."
My questions:
(1) In his list of "operators that enforce some order of evaluation," what does "and so on" include?
(2) Is short-circuit evaluation ALWAYS used with Boolean expressions in PL/SQL, even when the expression is outside one of these statements? For example:
boolvariable := p OR q;
Or:
CALL foo(p or q);
This is a very interesting paper. To attempt to answer your questions:
1) I suppose BETWEEN would be included in the "and so on" list.
2) I've tried to come up with a reasonably simple means of investigating this below. What I'm attempting to do is run a series of evaluations and record everything that is evaluated. To do this, I have a simple package (PKG) with two functions (F1 and F2), both returning a constant (0 and 1, respectively). These functions are "naughty" in that they write the fact that they have been called to a table (T). First, the simple code.
SQL> CREATE TABLE t( c1 VARCHAR2(30), c2 VARCHAR2(30) );
Table created.
SQL>
SQL> CREATE OR REPLACE PACKAGE pkg AS
2 FUNCTION f1( p IN VARCHAR2 ) RETURN NUMBER;
3 FUNCTION f2( p IN VARCHAR2 ) RETURN NUMBER;
4 END pkg;
5 /
Package created.
SQL>
SQL> CREATE OR REPLACE PACKAGE BODY pkg AS
2
3 PROCEDURE ins( p1 IN VARCHAR2, p2 IN VARCHAR2 ) IS
4 PRAGMA autonomous_transaction;
5 BEGIN
6 INSERT INTO t( c1, c2 ) VALUES( p1, p2 );
7 COMMIT;
8 END ins;
9
10 FUNCTION f1( p IN VARCHAR2 ) RETURN NUMBER IS
11 BEGIN
12 ins( p, 'F1' );
13 RETURN 0;
14 END f1;
15
16 FUNCTION f2( p IN VARCHAR2 ) RETURN NUMBER IS
17 BEGIN
18 ins( p, 'F2' );
19 RETURN 1;
20 END f2;
21
22 END pkg;
23 /
Package body created.
Now to demonstrate how CASE and COALESCE short-circuit further evaluations whereas NVL doesn't, we can run a simple SQL statement and look at what we recorded in T afterwards.
SQL> SELECT SUM(
2 CASE
3 WHEN pkg.f1('CASE') = 0
4 OR pkg.f2('CASE') = 1
5 THEN 0
6 ELSE 1
7 END
8 ) AS just_a_number_1
9 , SUM(
10 NVL( pkg.f1('NVL'), pkg.f2('NVL') )
11 ) AS just_a_number_2
12 , SUM(
13 COALESCE(
14 pkg.f1('COALESCE'),
15 pkg.f2('COALESCE'))
16 ) AS just_a_number_3
17 FROM user_objects;
JUST_A_NUMBER_1 JUST_A_NUMBER_2 JUST_A_NUMBER_3
0 0 0
SQL>
SQL> SELECT c1, c2, count(*)
2 FROM t
3 GROUP BY
4 c1, c2;
C1 C2 COUNT(*)
NVL F1 41
NVL F2 41
CASE F1 41
COALESCE F1 41
We can see that NVL executes both functions even though the first parameter (F1) is never NULL. To see what happens in PL/SQL, I set up the following procedure. In 100 iterations of a loop, this will test both of your queries ( 1) IF ..OR.. and 2) bool := (... OR ...) ).
SQL> CREATE OR REPLACE PROCEDURE bool_order ( rc OUT SYS_REFCURSOR ) AS
2
3 PROCEDURE take_a_bool( b IN BOOLEAN ) IS
4 BEGIN
5 NULL;
6 END take_a_bool;
7
8 BEGIN
9
10 FOR i IN 1 .. 100 LOOP
11
12 IF pkg.f1('ANON_LOOP') = 0
13 OR pkg.f2('ANON_LOOP') = 1
14 THEN
15 take_a_bool(
16 pkg.f1('TAKE_A_BOOL') = 0 OR pkg.f2('TAKE_A_BOOL') = 1
17 );
18 END IF;
19
20 END LOOP;
21
22 OPEN rc FOR SELECT c1, c2, COUNT(*) AS c3
23 FROM t
24 GROUP BY
25 c1, c2;
26
27 END bool_order;
28 /
Procedure created.
Now to test it...
SQL> TRUNCATE TABLE t;
Table truncated.
SQL>
SQL> var rc refcursor;
SQL> set autoprint on
SQL>
SQL> exec bool_order(:rc);
PL/SQL procedure successfully completed.
C1 C2 C3
ANON_LOOP F1 100
TAKE_A_BOOL F1 100
SQL> ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL=0;
Session altered.
SQL> exec bool_order(:rc);
PL/SQL procedure successfully completed.
C1 C2 C3
ANON_LOOP F1 200
TAKE_A_BOOL F1 200
The above shows that the short-circuiting occurs as documented, under both the maximum and minimum optimisation levels (10g-specific). The F2 function is never called. What we have NOT seen, however, is PL/SQL exploiting the freedom to re-order these expressions, presumably because in such a simple example there is no clear benefit to doing so. And I can verify that switching the order of the calls to F1 and F2 around yields results in favour of F2, as expected.
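As an aside, the same short-circuit behaviour is easy to reproduce outside PL/SQL; in Python, for instance, `or` likewise skips its second operand when the first is truthy:

```python
calls = []

def f1():
    calls.append("F1")
    return True      # truthy, so `or` stops here

def f2():
    calls.append("F2")
    return True

result = f1() or f2()   # f2 is never invoked, mirroring how pkg.f2 was never called above
```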
Regards
Adrian -
How to generate test data for all the tables in oracle
I am planning to use PL/SQL to generate test data for all the tables in a schema. The schema name is given as an input parameter, along with the minimum number of records in each master table and the minimum number of records in each child table. The data should be consistent in the columns that are used for constraints, i.e. using the same column values.
planning to implement something like
execute sp_schema_data_gen (schemaname, minrecinmstrtbl, minrecsforchildtable);
schemaname = owner,
minrecinmstrtbl= minimum records to insert into each parent table,
minrecsforchildtable = minimum records to enter into each child table of a each master table;
all_tables where owner = schemaname;
all_tab_columns and all_constraints, where owner = schemaname;
using dbms_random pkg.
Does anyone have a better idea for doing this? Is this functionality already there in the Oracle DB?
Ah, damorgan, data, test data, metadata and table-driven processes. Love the stuff!
There are two approaches you can take with this. I'll mention both and then ask which
one you think you would find most useful for your requirements.
One approach I would call the generic bottom-up approach which is the one I think you
are referring to.
This system is a generic test data generator. It isn't designed to generate data for any
particular existing table or application but is the general case solution.
Building on damorgan's advice, define the basic hierarchy (table collection, tables, data) and start at the data level.
1. Identify/document the data types that you need to support. Start small (NUMBER, VARCHAR2, DATE) and add as you go along
2. For each data type identify the functionality and attributes that you need. For instance for VARCHAR2
a. min length - the minimum length to generate
b. max length - the maximum length
c. prefix - a prefix for the generated data; e.g. for an address field you might want a 'add1' prefix
d. suffix - a suffix for the generated data; see prefix
e. whether to generate NULLs
3. For NUMBER you will probably want at least precision and scale but might want minimum and maximum values or even min/max precision,
min/max scale.
4. store the attribute combinations in Oracle tables
5. build functionality for each data type that can create the range and type of data that you need. These functions should take parameters that can be used to control the attributes and the amount of data generated.
6. At the table level you will need business rules that control how the different columns of the table relate to each other. For example, for ADDRESS information your business rule might be that ADDRESS1, CITY, STATE, ZIP are required and ADDRESS2 is optional.
7. Add table-level processes, driven by the saved metadata, that can generate data at the record level by leveraging the data type functionality you have built previously.
8. Then add the metadata, business rules and functionality to control the TABLE-TO-TABLE relationships; that is, the data model. You need the same DETPNO values in the SCOTT.EMP table that exist in the SCOTT.DEPT table.
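Steps 1-5 above can be sketched quickly (illustrative only; the names and attribute choices are hypothetical, and a real implementation would read these attributes from the metadata tables of step 4):

```python
import random
import string

def gen_varchar2(min_len=1, max_len=30, prefix="", suffix="", null_rate=0.0):
    """One VARCHAR2-style value, honouring the attributes from steps 2a-2e."""
    if random.random() < null_rate:
        return None                       # step 2e: optionally generate NULLs
    body_len = random.randint(min_len, max_len)
    body = "".join(random.choices(string.ascii_letters, k=body_len))
    return prefix + body + suffix

def gen_number(precision=8, scale=2):
    """Step 3: a NUMBER(precision, scale) value within the allowed magnitude."""
    magnitude = 10 ** (precision - scale) - 1
    return round(random.uniform(-magnitude, magnitude), scale)
```

Table-level and table-to-table rules (steps 6-8) would then call these generators column by column, reusing generated key values across parent and child tables.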
The second approach I have used more often. I would call it the top-down approach, and I use
it when test data is needed for an existing system. The main use case here is to avoid
having to copy production data to QA, TEST or DEV environments.
QA people want to test with data that they are familiar with: names, companies, code values.
I've found they aren't often fond of random character strings for names of things.
The second approach I use for mature systems where there is already plenty of data to choose from.
It involves selecting subsets of data from each of the existing tables and saving that data in a
set of test tables. This data can then be used for regression testing and for automated unit testing of
existing functionality and functionality that is being developed.
QA can use data they are already familiar with and can test the application (GUI?) interface on that
data to see if they get the expected changes.
For each table to be tested (e.g. DEPT) I create two test system tables. A BEFORE table and an EXPECTED table.
1. DEPT_TEST_BEFORE
This table has all DEPT table columns and a TESTCASE column.
It holds DEPT-image rows for each test case that show the row as it should look BEFORE the
test for that test case is performed.
CREATE TABLE DEPT_TEST_BEFORE
(
TESTCASE NUMBER,
DEPTNO NUMBER(2),
DNAME VARCHAR2(14 BYTE),
LOC VARCHAR2(13 BYTE)
)
2. DEPT_TEST_EXPECTED
This table also has all DEPT table columns and a TESTCASE column.
It holds DEPT-image rows for each test case that show the row as it should look AFTER the
test for that test case is performed.
Each of these tables are a mirror image of the actual application table with one new column
added that contains a value representing the TESTCASE_NUMBER.
To create test case #3 identify or create the DEPT records you want to use for test case #3.
Insert these records into DEPT_TEST_BEFORE:
INSERT INTO DEPT_TEST_BEFORE
SELECT 3, D.* FROM DEPT D WHERE DEPTNO = 20;
Insert records for test case #3 into DEPT_TEST_EXPECTED that show the rows as they should
look after test #3 is run. For example, if test #3 creates one new record, add all the
records from the BEFORE data set and add a new one for the new record.
When you want to run test case #3 the process is basically (ignore for this illustration that
there is a foreign key between DEPT and EMP):
1. delete the records from SCOTT.DEPT that correspond to test case #3 DEPT records.
DELETE FROM DEPT
WHERE DEPTNO IN (SELECT DEPTNO FROM DEPT_TEST_BEFORE WHERE TESTCASE = 3);
2. insert the test data set records for SCOTT.DEPT for test case #3.
INSERT INTO DEPT
SELECT DEPTNO, DNAME, LOC FROM DEPT_TEST_BEFORE WHERE TESTCASE = 3;
3 perform the test.
4. compare the actual results with the expected results.
This is done by a function that compares the records in DEPT with the records
in DEPT_TEST_EXPECTED for test #3.
I usually store these results in yet another table or just report them out.
5. Report out the differences.
This second approach uses data the users (QA) are already familiar with, is scaleable and
is easy to add new data that meets business requirements.
It is also easy to automatically generate the necessary tables and test setup/breakdown
using a table-driven metadata approach. Adding a new test table is as easy as calling
a stored procedure; the procedure can generate the DDL or create the actual tables needed
for the BEFORE and AFTER snapshots.
The main disadvantage is that existing data will almost never cover the corner cases.
But you can add data for these. By corner cases I mean data that defines the limits
for a data type: a VARCHAR2(30) name field should have at least one test record that
has a name that is 30 characters long.
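Step 4's record comparison can be sketched generically (illustrative Python; the original describes a PL/SQL function comparing DEPT against DEPT_TEST_EXPECTED):

```python
def diff_rows(actual, expected):
    """Compare two row sets (as tuples) and report what differs on each side."""
    actual_set, expected_set = set(actual), set(expected)
    return {
        "missing": sorted(expected_set - actual_set),     # expected but not found
        "unexpected": sorted(actual_set - expected_set),  # found but not expected
    }
```

An empty "missing" and "unexpected" pair means the test case passed; anything else is reported out, as in step 5.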
Which of these approaches makes the most sense for you? -
How can I import data in to the digital word generator in Multisim?
How can I import data in to the digital word generator in Multisim?
I just received this comment from a friend, a RADAR engineer, who has just down loaded Multisim. He has been using HP/Agilent software. He has a work around using a piecewise linear voltage waveform with data imported from Excel but this is not really a good solution. It would also be helpful to import data from Mathcad or equivalent.
"I thought I was about to be impressed with MultiSim but it ended only in disappointment. There is a word generator in the simulation instrument panel which can drive the DAC with a waveform and it can have thousands of lines of values. I opened Excel, wrote the formula to generate the time and voltage points for a chirp, converted to DAC values in Hex and then went back to the word generator in MultiSim to load the values only to find that you have to enter each value manually. It doesn’t even allow you to paste in a list of values from a text file. I’m not going to type 5000 values by hand. If you get the chance to give feedback to National Instruments please ask them if the paste option can be added to the word generator. MultiSim is useful in many regards, but in this case, it left me with the impression that it is considerably limited in capability compared to what I’m used to."
Hi,
You can load your data automatically in the Multisim word generator. Follow these steps:
- Save your data file (in Excel .xlsx or .csv format) on your computer
- Change the extension of the file to ".dp"
- Double-click the word generator in Multisim and click on Set...
- In the Settings dialog box, click on Load and then Accept
- This will prompt you to select the .dp file you have on your computer, select it and you're good to go
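As a sketch of the chirp workflow described in the question (compute samples, convert to hex DAC codes, write one value per line for the load file), here is an illustrative Python version; the 12-bit width and the plain one-word-per-line layout are assumptions, not the documented .dp format:

```python
import math

def chirp_hex_words(n_samples, f0, f1, sample_rate, bits=12):
    """Linear chirp from f0 to f1 Hz, scaled to unsigned DAC codes, as hex strings."""
    full_scale = (1 << bits) - 1
    duration = n_samples / sample_rate
    words = []
    for i in range(n_samples):
        t = i / sample_rate
        # instantaneous phase of a linear chirp: 2*pi*(f0*t + (f1-f0)*t^2/(2*T))
        phase = 2 * math.pi * (f0 * t + (f1 - f0) * t * t / (2 * duration))
        sample = (math.sin(phase) + 1) / 2          # map [-1, 1] -> [0, 1]
        words.append(format(round(sample * full_scale), "03X"))
    return words

# one value per line, ready to paste or load into the word generator
with open("chirp.dp", "w") as fh:
    fh.write("\n".join(chirp_hex_words(5000, 1e3, 10e3, 1e6)))
```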
However, in Multisim you have the option of creating your own custom simulation analysis and instrument.
I will try creating the instrument and send it back to you but it might take some time.
Multisim and LabVIEW are very powerful in test automation, with the custom instruments you create for Multisim you don't need to export your data file into excel from LabVIEW (or MathCAD or other tools) and then reload it into Multisim. The test procedure is automated instead.
Please check this reference design about automated simulation
http://zone.ni.com/devzone/cda/tut/p/id/7825
Here is how you can create your own custom measurement tool in Multisim and LabVIEW, but as I mentioned, I will create the word generator and come back to you anyways
http://zone.ni.com/devzone/cda/tut/p/id/5635
Let me know if you have any questions.
Mahmoud W
National Instruments -
How to test the extended controller and AM Object
I added a button on an OAF page and extended the CO and AM. When we click this button, an XML Publisher concurrent program should be called and the PO will be displayed in PDF format.
What is the best way to test this? Should I copy all the libraries from oracle/apps/... to my PC and attach them, or can I test without doing this? If so, what is the best approach?
I am new to this.
Thanks for your Help.
HP
I got the code ready with the extended CO Java file. How can I test this? How do I generate the class file? Once I generate the class file, if I FTP the file to cust/oracle/apps/pos/changeorder/webui and attach this CO in the personalization, then run the page and click the button, will this call the new CO?
Yes, after FTPing the controller class file to the appropriate path and attaching the extended controller through personalization, the new controller will get called.
But DO REMEMBER TO BOUNCE THE APACHE SERVER; only then will the extended controller file get called.
Here is a link with the steps to bounce it for Apps 11i.
http://oracleanil.blogspot.com/2009/04/ncrimmessageproc.html
Please let me know if these steps are correct:
1. New OAWorkspace
2. New Project
3. Right-click on the project and click New Class to generate the extended CO class.
Yes, it is right.
Thanks
--Anil
http://oracleanil.blogspot.com -
How to modify the SQL being generated from BC, to fix the issue
Hi,
We have seen a strange issue in our implementation. The issue is also reproducible in a vanilla environment.
In the Contact List Applet, if we query on the First Name or Last Name fields in the UI, the generated query shows that Siebel is querying for the first name in S_POSTN_CON.CON_FST_NAME. This is a normalized column for S_CONTACT.FST_NAME.
This is causing the performance issue.
When I check the configuration in Tools for Contact BC's First Name field, it is configured as follows.
Join = S_CONTACT
Column = FST_NAME.
I do not understand why it is still querying S_POSTN_CON.
Any suggestions on how to fix this issue to make the Query to be performed on S_CONTACT.FST_NAME?
Regards
Vamshi
Hi Vamshi,
As Robert mentioned, there just happens to be a number of things that need to be analyzed prior to changing the shape of the buscomp that triggers that sql.
If this Siebel performance issue occurs in a production environment, you should certainly look at the performance trend/characteristics of that SQL over time and assess its impact on your business community (...), then carefully identify its true root cause, implement a fix and validate it against a production-like environment in order to verify there is no regression associated with it; once the fix is deployed on your production system, you want to monitor its benefit over time and on a 24x7 basis. All this may sound very generic, yet these are good practices.
If you are looking at effectively solving this Siebel performance issue (and others...) in a timely manner, it is best to have your Siebel teams 1) use a Siebel-specific performance monitoring software technology built by Siebel architects (like GMT v1.8.5, more info @ www.germainsoftware.com) that is able to collect 24x7 all the data needed for root-cause analysis (and more...), and 2) have senior Siebel architects (like Robert's team), who have successfully solved many severe performance and scalability issues over the years, provide technical guidance to your team throughout the resolution process.
Siebel CRM is a great CRM software solution that is very complex. Every "switch you turn on/off" and every customization you build into it may generate performance issues if it is not carefully implemented, optimized, tested...and monitored 24x7 once it is deployed onto your production system.
Good luck w/ this..
Regards,
Yannick Germain
CEO & Founder
GERMAIN SOFTWARE llc
Complete Siebel Performance Monitoring Tool
21 Columbus Avenue, Suite 221
San Francisco, CA 94111, USA
Cell: +1-415-606-3420
Fax: +1-415-651-9683
[email protected]
http://www.germainsoftware.com -
How do we simulate an event to test the open SR function
Do we have details on how to simulate:
An event on a device so the customer can test/experience how the alert process works, and
an event on a device so the customer can test/experience the alert with the SR-creation process.
The Deployment Guide states we can simulate alert and refers to the configuration guide by device.
The device in question is the ASR9K. Where in the configuration guide do we show "Testing Cisco TAC case creation with simulated fault(s)"?
Tim
Hi!
I can't get this demo SR to work. I check the "demo SR" checkbox and generate an inventory. I do receive the inventory notification in my mailbox, but no SR is created.
You mention above that any type of Call Home message will generate this demo SR, but the only documentation I find regarding demo SRs says something else. On page 16 of the new "Smart Call Home Deployment Guide" I find this:
Step 7. Click Submit. When a Call Home message [that generates a service request]
is sent, a demo service request will be generated.
So two questions:
Why isn't a demo SR created?
Does any Call Home message raise a demo SR (if the checkbox is checked), or does it require a Call Home message that would have triggered a real SR?