DSC plantwide architecture
I would like to get some opinions. I am developing a large DSC
application (LV 8.2 hopefully), that will utilize perhaps 20 different
"process computers" running LV DSC, and logging the data back to a
central "server" containing the Citadel database. Each of these
process computers will have a unique set of shared variables defined,
and a unique process name. Each process will be monitoring
perhaps 50 shared variables.
What is better:
a) logging ALL processes back to a single database, OR
b) having one database for each process (so the server has 20 databases running concurrently)
I am looking for opinions in terms of PERFORMANCE but also
MAINTAINABILITY. Is there any advantage/disadvantage to either
method, or are they equally good?
FYI, I will be developing a custom historical data viewing application
that will reside on the server, and users from the corporate network
will utilize Remote Panel to use it. The historical data viewer
will have to be able to access data from all processes at the same time
(for example, for comparing performance curves of two different process
centres).
Thanks,
David Moerman
TruView Technology Integration
Thanks for your ideas.
I like the idea of having the 20 DBs distributed on different machines
but mirroring that data back to a central server. Sounds great
for redundancy. Problem is, I don't know how to do that with DSC
or any other method for that matter.
As for server failure, I could do several things. First of all,
I've already tested (in 7.1) what would happen if the Citadel database
"unplugs" while a process is running (that is, say the server suddenly
loses power). The process computers can still run normally.
(Perhaps there will be an error on the chain that I could trap, but I
didn't look that hard). Also, at the time I could bring the
server back on line and it just started logging again as if nothing
happened, except for a gap in the data. This is where local
caching might be a good idea -- I'll keep that in mind.
Also thinking of server failure: we could use RAID to continuously
mirror the server HD, as well as using a scheduled event to backup the
database folder to somewhere else on the corporate network.
Should work I think.
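The scheduled-backup part is easy to script. Here is a minimal sketch in Python (the paths and function name are mine, not anything DSC provides); in practice you would point it at the Citadel database folder and let Windows Task Scheduler run it nightly:

```python
# Copy the Citadel database folder to a timestamped destination on the
# corporate network. Paths below are hypothetical placeholders.
import shutil
from datetime import datetime
from pathlib import Path

def backup_folder(src: str, dest_root: str) -> Path:
    """Copy src into dest_root under a timestamped name; return the new path."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    dest = Path(dest_root) / f"citadel_backup_{stamp}"
    shutil.copytree(src, dest)  # raises if the destination already exists
    return dest

# e.g. backup_folder(r"D:\Citadel\PlantDB", r"\\corpserver\dsc_backups")
```

One caveat: copying the folder while the engine is actively writing may catch files mid-update, so schedule it for a quiet period or pause logging first.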
BTW, this is a paint mixing plant. So I will be using "data sets"
to store batch-oriented data. As for DSC run-time licences, I'm
not worried -- NI is still much cheaper on this than their many
competitors in this field! But that's because DSC is, shall we
say, "less proven" than the alternatives like Wonderware, etc...
I will not be acquiring data super-fast, but we could expect about 1
data point per second. Still, over many years the database will get
big. And yes, having all the data on a central server means outside
clients only need to access a single location. Also, it
becomes a convenient dividing point between the process network and the
corporate LAN. I was thinking of 2 ethernet cards in the server.
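To put a number on "big": a worst-case back-of-envelope, assuming all 1,000 variables (20 processes x 50 each) log every second at roughly 8 bytes per raw sample; Citadel's change-driven logging and compression should land well below this:

```python
# Worst-case Citadel growth estimate. The 8 bytes/sample figure is an
# assumption for one raw double, ignoring index overhead and compression.
processes = 20
variables_per_process = 50
samples_per_second = 1
bytes_per_sample = 8

samples_per_year = processes * variables_per_process * samples_per_second * 86_400 * 365
gib_per_year = samples_per_year * bytes_per_sample / 2**30
print(f"{samples_per_year:,} samples/year, about {gib_per_year:.0f} GiB/year worst case")
```

Roughly 235 GiB/year as an upper bound, which argues for deadbands and an archiving plan either way.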
Well, so far I haven't heard any technical preference for 1 big DB
vs 20 smaller DBs (albeit both on the same server). And if anyone
knows how to do automatic DB mirroring with DSC...
Thanks again,
-Dave
Similar Messages
LV7.1 DSC tag engine VS LV8.6 DSC shared variables
I'm currently running LV7.1 with DSC and RT. To handle communications and logging of RT variables I'm using the init/read/write publish VIs on the RT side and DataSockets on the HMI side (Windows XP). This has worked out great: new tags can be created programmatically in real time with the publish VIs, and then I go to the .scf file and use the tag configuration wizard to add them and handle data logging. The wizard would organize all of the memory tags into folders by the block name used by the init publish VI. I could also select entire groups of tags and add hundreds at a time to the .scf file. Hardware tags worked in a similar fashion, organized into folders by controller and module.
Now, looking at LV8.6, I found I can still use the init/read/publish VIs on the RT side. Great. However, there is no tag configuration editor as in LV7.1 to let me add large numbers of tags through a wizard. The closest thing I've found is to create a library to represent each block name from the RT init publish VI, then use the "create bound variables" option under the library to bind new shared variables to the RT memory tags. I can browse to the tags on the controller by network items, but when I add them it doesn't bring in the block name of the tag as 7.1 did, only the item name. I use a lot of PID loops that share the same tag names (i.e. P, I, D, mode, output), so not including the block name is an organizational problem. This is very labor-intensive compared to the wizard in LV7.1 DSC, especially when creating systems with thousands of RT memory tags.
There is a similar problem with hardware channels (I'm using Compact FieldPoint). To log channels via DSC, do I have to create a shared variable for each channel to access the DSC logging capabilities? And how do I add all of the hardware channels in some organized fashion?
I hope I'm missing some tool that is an analog to the tag configuration wizard to bring in these channels and organize them. Any help or suggestions would be appreciated. Thanks, Brad
Hi lb,
We're glad to hear you're upgrading, but because there was a fundamental change in architecture since version 7.1, there will likely be some portions that require a rewrite.
The RTE needs to match the version of DSC you're using. Also, the tag architecture used in 7.1 is not compatible with the shared variable approach used in 2012. Please see the KnowledgeBase article Do I Need to Upgrade My DSC Runtime Version After Upgrading the LabVIEW DSC Module?
You will also need to convert from tags to shared variables. The change from tags to shared variables took place in the transition to LabVIEW 8. The KnowledgeBase Migrating from LabVIEW DSC 7.1 to 8.0 gives the process for changing from tags to shared variables.
Hope this gets you headed in the right direction. Let us know if you have more questions.
Thanks,
Dave C.
Applications Engineer
National Instruments
Poor response of LV 8.0.1 & DSC with RSLinx
Hi,
I'm having some flaky problems with response times using LV DSC 8.0.1 reading shared variables connected to RSLinx 2.50. The aim here is to see if LV DSC is sufficiently robust to use as an HMI on a large project. (I've previously used RSView32 without any mentionable problems on a 5000+ tag system.)
The arrangements are:
ControlLogix PLC running sample code being scanned via 100Mb Ethernet connection from RSLinx Gateway 2.50.00.20 (CPR7)
LabVIEW 8.0.1 with DSC module 8.0.1
Shared variables linked to RSLinx Gateway via Topic
Sample front panel with BOOL buttons, Double and BOOL indicators etc.
A second computer running Server Explorer 2.4.1 to look at OPC tags (SV) from the SV engine and RSLinx
OPC scan rates set to 100ms placing on scan about 30 tags (to use PLC parlance!)
After a few false starts, largely to do with data types, I have the systems running but am experiencing somewhat flaky performance comparing button presses to indicator illumination.
There seems to be a delay of somewhere between 4-10 seconds or more in relaying all types of data to the front panel. This has been confirmed by using RSLogix 5000 to look at the ladder code and RSLinx Data Monitor to monitor the PLC tags. BOOL button writes are too quick to measure, so I know it's not a problem of sending the information. This response seems to be corrected by a reboot, but not every time!
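One low-tech way to quantify that delay, independent of the toolchain: wrap whatever write and read paths you have (shared variable, OPC item) in two callables and time how long a written value takes to echo back. The write_tag/read_tag callables below are placeholders, not a real NI API:

```python
# Measure write-to-read round-trip latency through an arbitrary tag path.
import time

def measure_round_trip(write_tag, read_tag, value, timeout=15.0, poll=0.05):
    """Return seconds until read_tag() returns value, or None on timeout."""
    start = time.monotonic()
    write_tag(value)
    while time.monotonic() - start < timeout:
        if read_tag() == value:
            return time.monotonic() - start
        time.sleep(poll)
    return None
```

Logging the result once a minute over a day would show whether the 4-10 second delay is constant, periodic, or something that builds up until a reboot.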
Are there any Shared Variable issues with LabVIEW 8.0.1 and DSC 8.0.1 that are cured by a patch or by an upgrade to 8.2?
Can I monitor any performance parameters or memory use to help diagnose the problem?
Thanks...Andy
Lead Engineer, Controls and Electrical
Irvine CA
Writing this purely for the record!
We found that the slowdown problem was random: either non-existent or at maximum 14 seconds. The problem was "cured" by moving the Shared Variable Engine to another computer, leaving RSLinx running on the original computer. There seemed to be no delay in 1 day of testing, but I may be proved wrong!
Our architecture (which we'd intended to do) will therefore be RSLinx 2.50 on a Control server, LabVIEW DSC and the Shared Variable Engine on another. Clients will then connect to the SVE to pull off HMI displays and to view data.
End of Topic
Lead Engineer, Controls and Electrical
Irvine CA
I want to Host my Shared Variables on a cRIO, but use DSC for Logging/Alarming/SCADA
Hi everyone,
What I'm trying to do is this:
-Host shared variables on my RT targets (cRIO-9022) for reliability
-Use the DSC module to log to database, track Alarms, develop distributed HMI
The problem I'm running into is that the DSC engine (it seems) needs the shared variables it is monitoring to be hosted on that computer, and the DSC engine cannot run on a real-time target.
My end goal is to create a plant-wide network of cRIOs that are all linked back to a central server/PC that runs DSC to collect and store data in a database. The plant HMIs would also connect to the central server and get their information from there (not directly from the cRIO process). Is this possible, and does anyone have ideas on how to do this efficiently?
Thanks for the help.
--CLD--
LV 6.1, 8.6.1, 2011 SP1, 2012 SP1
Solved!
Sachsm,
Thanks for the input. I tried to create a test project for this type of architecture (bound NSV's) but am running into some errors.
I have attached a screenshot of the project and front panel showing the binding status of each variable **see attached picture**
Hosted on PC:
-Clone (Variable1) ---- This is bound to Variable1 on cRIo using the "Create Bound Variables" option in the Library
-Variable3
Hosted on cRIO
-Variable1
As you can see, when I drag Variable1 directly onto the PC front panel, the variable connects (indicator is green). Likewise, when I host Variable3 on the PC and drag it to the front panel, it connects. However, when I drag the Clone (the variable bound to Variable1 on the cRIO) onto the front panel, it cannot connect. Any thoughts?
--CLD--
LV 6.1, 8.6.1, 2011 SP1, 2012 SP1
Attachments:
Binding Error.jpg 127 KB
I thought I'd take a moment to start a wish list and collect some of my comments on LV DSC. I have become somewhat of a super-user of DSC recently, and have a lot of feedback for any NI people who are following these pages.
Overall, I am very pleased with DSC 6.1, but have a few ideas. I have already mentioned some of these ideas in the Discussion Forums, but here's a collection:
--Speed it up! The DSCEngine and other tools are pretty big and slow. They take a long time to load and run. It'd be great if the tools could start faster (particularly the DSCEngine and the Tag Configuration Editor).
--Double-click on an SCF file and it opens in the Tag Configuration Editor automatically.
--Option-click on an SCF file and allow the user to choose "Launch Engine With This SCF file".
--Don't force users to run compiled EXEs that contain DSC VIs in the DSC Run-Time folder. It'd be nice if we could run them in a different location.
--Get rid of that dialog that pops up when you shut down DSC while a client is connected to the LabVIEW OPC Server: "There are x OPC Clients attached to LabVIEW IA OPC Server. Do you still want to quit?". Or allow it to be disabled.
--Enable Tag Engine to run as an NT Service. If the power goes out, it will allow the Engine to start again when the PC restarts (without enabling automatic login to windows).
--This is a complicated one: Allow users to connect to DSTP or OPC items that are "arrays of doubles" and log them to citadel as an ordinary analog tag. For example, we have a DSTP URL that contains an array of doubles. These are a series of temperature values at time t0, t1, t2, t3. It is periodically updated with subsequent values t4, t5, t6, t7... and so on. On that URL, we also have an attribute (another array of doubles) that contains the timestamps for each value. It'd be cool if the DSC Engine could read this URL and log it straight to Citadel. Right now we have our own device driver that reads the values and adds them to the input queue... but it has some performance problems.
--Allow users to disable the splash screen that appears when starting the DSCEngine.
--Allow different alarms on string tags. For example, allow the user to setup an alarm that occurs when a string tag equals "error" or something like that. Right now we can only set Bad Status alarms on string tags.
--Allow the user to open the Tag Configuration Editor without totally opening the Run-Time System. It'd be nice if the editor were a fast, little app like the Tag Monitor that could be run without opening DSC.
--How about changing the name? Heh heh... LabVIEW DSC gets a bit confusing to say. Everyone here keeps calling it LabVIEW DCS. Also, I think the word "datalogging" confuses a lot of people. You see a lot of posts in the Discussion Forums about "datalogging" in general... but they have nothing to do with the DSC toolkit itself. Why not LabVIEW SCADA?
Hi Helene,
I'm not sure what you mean by "roadmap". Are you looking for a general "architecture map" that explains the different software components and how they communicate with each other? The best way to get an overview would be to read the user's manuals and various NI web pages. I'd start here:
http://sine.ni.com/apps/we/nioc.vp?cid=10418&lang=US
You will find a whole bunch of links on this page that provide info on LV DSC. For example, check out the application notes on DSC:
http://sine.ni.com/apps/we/nioc.vp?pc=res&cid=10418&lang=US&p_0=appnote
And the online demonstration videos at:
http://digital.ni.com/demo.nsf/websearch/2484F51992BC939B86256ABD005245A6?OpenDocument&node=10418_US
And also see the Getting Started manual:
http://digital.ni.com/manuals.nsf/websearch/F5984D148CAF106686256AF500569F91?OpenDocument&node=10418_US
Or are you asking about a "plan for the future" roadmap? A roadmap that would explain product developments and plans to address some of the items in my "Wish List" above? If so, that question could only be answered by NI... and they would probably only give you very vague hints.
Shared variable architecture for distributed crio system
I have a distributed system consisting of 20 cRIOs and one central server running Windows XP. Each cRIO is a mix and match of AI, AO, DO, DI and TC modules. There are around 200 AI and TC channels and close to 100 AO, DO and DI channels. I need to acquire data from all the AI and TC channels at 10 Samples/s and log them on the server. In some cRIOs I have 2 to 3 PID loops in addition to the data acquisition or generation code. Since a cRIO chassis can have only eight modules, certain PID loops will have to use data from channels that are available on other cRIOs. The cRIOs can be turned ON and will run even without the control server, but the facility and tests will be started only through the control server.
We have decided to use LabVIEW DSC for this application and shared variables for data transfer, logging and alarms. We are using the following SV architecture.
1. Each cRIO will have its own RT shared variables for AI, AO, DI, DO and TC parameters
2. Each AI and AO parameter shared variable in RT will have a corresponding host shared variable bound to the RT shared variable. On the host shared variables we enable the logging and alarm options for the variables corresponding to AI parameters.
3. In the cRIO RT code, data is read from the FPGA in binary format (via FIFO every 500 ms, giving 5 samples per channel) and scaled. After scaling we update the corresponding AI RT shared variables using DataSocket writes in a loop. Any one cRIO will have at most 32 AI or TC parameters.
4. The server has a mimic panel where we need to display the status of DIs and allow the user to turn DOs ON and OFF. For that we decided to use front panel DataSocket binding.
I would like to know whether there is a better shared variable architecture that we could follow. Considering the number of variables that we are using, will the current architecture cause any performance issues?
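For the raw-throughput side of that question, a quick estimate (the ~100 bytes per shared-variable update is my assumption for wire cost including protocol overhead; actual PSP packet sizes vary):

```python
# Back-of-envelope network load for the architecture described above.
analog_channels = 200    # AI + TC, logged at 10 S/s
digital_channels = 100   # AO/DO/DI, assume ~1 update/s worst case
bytes_per_update = 100   # assumed average wire cost per update

updates_per_second = analog_channels * 10 + digital_channels * 1
kib_per_second = updates_per_second * bytes_per_update / 1024
print(f"{updates_per_second} updates/s, roughly {kib_per_second:.0f} KiB/s")
```

About 2,100 updates/s and a couple hundred KiB/s is negligible on Ethernet, so any bottleneck is more likely per-variable processing in the SVE/DSC engine on the server than the network itself.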
Thanks!
"A VI inside a Class is worth hundreds in the bush"
This guy is a tiger, I tell you!!!
Hi Mike,
Looks like you are lucky, as you can make use of the great features available in 8.6. Over here I have 8.5.1, so I have to do some things the 'hard' way. But I must say, 8.5.1 is so far the best version of LabVIEW I have worked with since 7.1, as crashes are less and less frequent.
BTW, I faced some problems directly dragging and dropping an RT SV node to Windows. If the RT code is running and you try to run the Windows code, it asks to stop the code running in RT. That's one reason I had to duplicate some RT shared variables on the host and bind them to the corresponding RT shared variables.
This is the first time I am handling this many shared variables. If I come across any issues, I will post them here.
"A VI inside a Class is worth hundreds in the bush"
This guy is a tiger, I tell you!!!
Looking for similar VIs in DSC 7.0 to upgrade to DSC 8.5
Hi,
I am working on upgrading a project from DSC 7.0 to DSC 8.5. In the old project, I see they used VIs relating to the VI Server Development Toolkit, which includes two sets of VIs: Server Registration VIs and Server Interface VIs. These are old VIs; I think they are from 1997. Can anyone tell me if there is a similar VI, or what the replacement technique for these VIs is?
Thank you for your time reading and answering my question.
Thang NguyenHi Thang,
VI-based servers created in the LabVIEW Datalogging and Supervisory Control (DSC) Module 7.x are supported in the DSC Module 8.x. However, National Instruments recommends that you migrate to the new custom I/O server architecture, because VI-based servers are a legacy feature of the DSC Module. There is a great KnowledgeBase article that describes in detail how you can go about doing this. Also, there are great examples in the NI Example Finder that will help you set up your I/O servers with DSC 8.5. You can also find more information in the LabVIEW help. I hope this helps!
Carla
National Instruments
Applications Engineer -
Hand - Off - Auto / Remote - SCADA Architecture Question *All You Super Users Read This*
I am new to the forum. I have searched the net for almost 6 months on this and haven't really found a good response....
I have been around the block a few times with various control systems (Arduino/LabVIEW/Allen-Bradley/general SCADA) and am trying to figure out how to build a "proper" control system on the cheap. I used to use a lot of LabVIEW in school, so that is how I ended up here.
What I want to do is have Hand Control at the Machine (Arduino Control System / or something) be able to switch that to Auto/Remote (Labview or some other language) then have that talk to SCADA (Thingspeak, SQL, havent figured this part out yet).
My question is how would you do it? I am a pretty good programmer and know just about every language so nothing is off the table. Seriously I would like to see some creative action.
I don't think LIFA is the best, because I need LOCAL control of all the variables. I would like to have PID loops etc. running on the Arduino. I need to be controlling serially attached machines. I have some RS-485/232 going to VFDs and a dSPiN as well.
However, when I want to control it remotely, I would like to be able to pass set points if in auto mode or even better, in addition, be able to override auto and drill down into each attached component and control it in Hand remotely.
I know I could do this by building everything in VIs, then linking through LIFA and using the VI as my hand/auto. However, I do not trust the link between LIFA and the Arduino that much. Also, I do not know whether LIFA is capable of the RS-485 and serial data to the dSPiN. I need to be on Ethernet/WiFi, and if the controller goes down we may have a legitimate safety problem; hence I want local control as my backup.
Furthermore, how would you do the SCADA? I don't want to spend $4K on a full LabVIEW license... I would prefer just using the iPad app and some free online services. I will have a server running all the time as well.
If LIFA can be used to do some of this please let me know....Any links to past projects by people would be a huge help.
My Plan right now is:
Machine Data/Control sent to Arduino over RS-485 , 4-20ma and SPI.
Local Control on the Arduino. Local Arduino connected LCD displays Key Parameters.
Send Local Main Variables *Set Point / Run / Stop Etc / Key Data* over Serial or Hacked LIFA to Labview over ethernet or wi-fi.
Labview acts as main supervisory "go-between" and provides all main "in house" HMI support
Real-time processing happens in LabVIEW and is displayed on LabVIEW VIs; real-time data is also sent out remotely over a service for worldwide access.
Labview also sends all real time data to historical service of some kind either (web based or local based) (will be locally hosted)
Access historical data remotely by some means (API / Webservice)
Remote World Wide Access to Labview and thus machine, through Labview iPad app or VPN into house.
World Wide Historical Access through API or Webpage.
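The "go-between" record path in the plan above can be sketched language-agnostically; here is a hedged Python version (the endpoint URL and field names are illustrative only, and parsing the Arduino's serial stream is assumed to have already happened):

```python
# Take one sample of key data (already parsed from serial), timestamp it,
# append it to a local historical log, and optionally POST it to a remote
# service. Endpoint and field names are placeholders, not a real service.
import json
import time
import urllib.request

def record(sample, logfile, endpoint=None):
    """Log one timestamped sample locally and, if given, to a web service."""
    stamped = {**sample, "ts": time.time()}
    logfile.write(json.dumps(stamped) + "\n")  # local history, one JSON per line
    if endpoint is not None:
        req = urllib.request.Request(
            endpoint,
            data=json.dumps(stamped).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=5)  # upload to remote service
    return stamped
```

A newline-delimited JSON file doubles as the locally hosted historical store until you settle on a real database or web service.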
That is where I am at in my thinking. Please blow it up and make it better.
I appreciate y'alls help. I posted this in the Arduino Section and they told me to post here.
~Colin
Hello Colin,
Unfortunately I am not familiar with the overall architecture, but if you need to build a SCADA-type system, you can use the LabVIEW Datalogging and Supervisory Control (DSC) Module. It's an add-on to LabVIEW and a lower-cost alternative to other SCADA programs out there.
The LabVIEW software is not a free tool, but it will save you time developing your project. Also, if you use one provider, your support requests will be handled from a single source, making your development process more efficient.
Please refer to the following links in case you are interested in evaluating the LabVIEW Software and LabVIEW Datalogging and Supervisory Control (DSC) Module.
http://www.ni.com/trylabview/
http://sine.ni.com/nips/cds/view/p/lang/en/nid/210561
Regards
Luis S
Application Engineer
National Instruments
Powershell DSC - xSQLServer -- xSQLServerSetup error.
Hi All,
I've been trying to automate the installation of SQL Server using the experimental DSC module for SQL Server. This issue occurs in both my Vagrant environments and my vCenter environments.
This is the entirety of the script that does the meat and potatoes of the install.
# use xSQLServerSetup of xSQLServer
# SQL install error log can be found at C:\Program Files\Microsoft SQL Server\120\Setup Bootstrap\Log
param (
    [string]$SetupCredA = $(throw "error -SetupCredA is required. Need installer username."),
    [string]$SetupCredPW = $(throw "error -SetupCredPW is required. Need installer password."),
    [string]$SvcAccountCredA = $(throw "error -SvcAccountCredA is required. Need to know service account."),
    [string]$SvcAccountPW = $(throw "error -SvcAccountPW is required. Need to know service account pw."),
    [string]$Features = $(throw "error -Features is required."),
    [string[]]$SAArray = $(throw "error -SAArray is required. Need to know sysadmins.")
)

$SetupCred = New-Object System.Management.Automation.PSCredential ($SetupCredA, (ConvertTo-SecureString $SetupCredPW -AsPlainText -Force))
$SvcAccountCred = New-Object System.Management.Automation.PSCredential ($SvcAccountCredA, (ConvertTo-SecureString $SvcAccountPW -AsPlainText -Force))
$Extract = "C:\InstallSQL"

$ServerCoreFeatures = "SQLENGINE,REPLICATION,FULLTEXT,AS,CONN,IS,SNAC_SDK"
$ServerGUIFeatures = "SQLENGINE,REPLICATION,FULLTEXT,DQ,AS,DQC,CONN,IS,BC,SDK,BOL,SSMS,ADV_SSMS,SNAC_SDK,MDS,DREPLAY_CTLR,DREPLAY_CLT"
$ServerGUIFeaturesWithReporting = "$ServerGUIFeatures,RS"
$ReportServerFeatures = "RS"

# map the friendly feature-set name to the actual feature list
switch ($Features) {
    "ServerCoreFeatures"             { $Features = $ServerCoreFeatures }
    "ServerGUIFeatures"              { $Features = $ServerGUIFeatures }
    "ServerGUIFeaturesWithReporting" { $Features = $ServerGUIFeaturesWithReporting }
    "ReportServerFeatures"           { $Features = $ReportServerFeatures }
}

$configData = @{
    AllNodes = @(
        @{
            NodeName                    = "$($env:computername)"
            PSDscAllowPlainTextPassword = $true
        }
    )
}

Configuration SetupSQL
{
    Import-DscResource -ModuleName xSQLServer

    Node "$($env:computername)"
    {
        xSQLServerSetup Install-SQL
        {
            SourcePath          = $Extract
            SourceFolder        = "SQL2014"
            SetupCredential     = $SetupCred
            SQLSvcAccount       = $SvcAccountCred
            AgtSvcAccount       = $SvcAccountCred
            SQLSysAdminAccounts = $SAArray
            UpdateEnabled       = "False"
            UpdateSource        = "$Extract\SQL2014\Updates"
            ErrorReporting      = "True"
            SQLUserDBDir        = "M:\Data"
            SQLUserDBLogDir     = "L:\Log"
            SQLTempDBDir        = "T:\TempDB"
            SQLTempDBLogDir     = "T:\TempLog"
            SQLBackupDir        = "M:\Backup"
            InstanceName        = "MSSQLSERVER"
            Features            = $Features
        }
    }
}

SetupSQL -ConfigurationData $configData
Start-DscConfiguration .\SetupSQL -Force -Wait -Verbose
The environment is prestaged with
$Modules="C:\Program Files\WindowsPowerShell\Modules"
$Extract="C:\InstallSQL"
$ResourceKit="$Extract\DSCRK9.zip"
$WinSXSFiles="$Extract\sxsfiles.zip"
$SQLInstall="$Extract\SQL2014.zip"
mkdir $Extract -force
write-host "$(get-date) Downloading install resources"
start-bitstransfer "http://downloads.yosemite.local/files/application/microsoft/sqlinstall/dscrk9.zip" $ResourceKit
start-bitstransfer "http://downloads.yosemite.local/files/application/microsoft/sqlinstall/sxsfiles.zip" $WinSXSFiles
start-bitstransfer "http://downloads.yosemite.local/files/application/microsoft/sqlinstall/sql2014.zip" $SQLInstall
write-host "$(get-date) Download completed"
Configuration PreStageSQL
{
    Node "$($env:computername)"
    {
        Archive Extract-Resource-Kits
        {
            Ensure      = "Present"
            Path        = $ResourceKit
            Destination = $Extract
        }
        Archive Extract-WinSXS-Files
        {
            Ensure      = "Present"
            Path        = $WinSXSFiles
            Destination = $Extract
            DependsOn   = "[Archive]Extract-Resource-Kits"
        }
        Archive Extract-SQL-Files
        {
            Ensure      = "Present"
            Path        = $SQLInstall
            Destination = $Extract
            DependsOn   = "[Archive]Extract-WinSXS-Files"
        }
        File Move-Resource-Files
        {
            SourcePath      = "$Extract\All Resources"
            DestinationPath = $Modules
            Ensure          = "Present"
            Type            = "Directory"
            Recurse         = $True
            MatchSource     = $True
            DependsOn       = "[Archive]Extract-SQL-Files"
        }
        WindowsFeature Install-NET35
        {
            Name      = "NET-Framework-Core"
            Source    = "$Extract\sxs"
            Ensure    = "Present"
            DependsOn = "[File]Move-Resource-Files"
        }
    }
}

PreStageSQL
Start-DscConfiguration .\PreStageSQL -Force -Wait -Verbose
write-host "$(get-date) completed"
The setup errors out at the end with the following messages in the console/event log. The SQL Install itself appears to be complete. I've tried this with UpdateEnabled = "true" as well and it errors at the same location.
So the install appears to complete successfully, but PowerShell reports an error:
'C:\InstallSQL\SQL2014\setup.exe' started in process ID 192
VERBOSE: [WINDOWS2012R2]: [[xSQLServerSetup]Install-SQL] Importing function 'NetUse'.
VERBOSE: [WINDOWS2012R2]: [[xSQLServerSetup]Install-SQL] Importing function 'ResolvePath'.
VERBOSE: [WINDOWS2012R2]: [[xSQLServerSetup]Install-SQL] Importing function
'StartWin32Process'.
VERBOSE: [WINDOWS2012R2]: [[xSQLServerSetup]Install-SQL] Importing function
'WaitForWin32ProcessEnd'.
VERBOSE: [WINDOWS2012R2]: [[xSQLServerSetup]Install-SQL] Path:
C:\InstallSQL\SQL2014\setup.exe
VERBOSE: [WINDOWS2012R2]: LCM: [ End Set ] [[xSQLServerSetup]Install-SQL] in 500.8120 seconds.
PowerShell DSC resource MSFT_xSQLServerSetup failed to execute Set-TargetResource functionality with error message:
Set-TargetResouce failed
+ CategoryInfo : InvalidOperation: (:) [], CimException
+ FullyQualifiedErrorId : ProviderOperationExecutionFailure
+ PSComputerName : WINDOWS2012R2
The SendConfigurationApply function did not succeed.
+ CategoryInfo : NotSpecified: (root/Microsoft/...gurationManager:String) [], CimException
+ FullyQualifiedErrorId : MI RESULT 1
+ PSComputerName : WINDOWS2012R2
VERBOSE: Operation 'Invoke CimMethod' complete.
VERBOSE: Time taken for configuration job to complete is 501.455 seconds
With the following errors in event viewer.
Job {5E0C5C09-B7B2-11E4-80B6-000C29F93310} :
Message Set-TargetResouce failed
HResult -2146233087
StackTrack at System.Management.Automation.Interpreter.ThrowInstruction.Run(InterpretedFrame frame)
at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame)
Job {5E0C5C09-B7B2-11E4-80B6-000C29F93310} :
This event indicates that failure happens when LCM is processing the configuration. ErrorId is 0x1. ErrorDetail is The SendConfigurationApply function did not succeed.. ResourceId is [xSQLServerSetup]Install-SQL and SourceInfo is C:\InstallSQL\InstallSQL.ps1::41::9::xSQLServerSetup. ErrorMessage is PowerShell DSC resource MSFT_xSQLServerSetup failed to execute Set-TargetResource functionality with error message: Set-TargetResouce failed .
Job {5E0C5C09-B7B2-11E4-80B6-000C29F93310} :
DSC Engine Error :
Error Message The SendConfigurationApply function did not succeed.
Error Code : 1
However everything from the SQL install summary appears to have been created.
Overall summary:
Final result: Passed
Exit code (Decimal): 0
Start time: 2015-02-18 21:09:08
End time: 2015-02-18 21:17:01
Requested action: Install
Machine Properties:
Machine name: WINDOWS2012R2
Machine processor count: 2
OS version: Windows Server 2012
OS service pack:
OS region: United States
OS language: English (United States)
OS architecture: x64
Process architecture: 64 Bit
OS clustered: No
Product features discovered:
Product Instance Instance ID Feature Language Edition Version Clustered Configured
Package properties:
Description: Microsoft SQL Server 2014
ProductName: SQL Server 2014
Type: RTM
Version: 12
SPLevel: 0
Installation location: C:\InstallSQL\SQL2014\x64\setup\
Installation edition: Enterprise Edition: Core-based Licensing
Product Update Status:
None discovered.
User Input Settings:
ACTION: Install
ADDCURRENTUSERASSQLADMIN: false
AGTSVCACCOUNT: Administrator
AGTSVCPASSWORD: *****
AGTSVCSTARTUPTYPE: Automatic
ASBACKUPDIR: C:\Program Files\Microsoft SQL Server\MSAS12.MSSQLSERVER\OLAP\Backup
ASCOLLATION: Latin1_General_CI_AS
ASCONFIGDIR: C:\Program Files\Microsoft SQL Server\MSAS12.MSSQLSERVER\OLAP\Config
ASDATADIR: C:\Program Files\Microsoft SQL Server\MSAS12.MSSQLSERVER\OLAP\Data
ASLOGDIR: C:\Program Files\Microsoft SQL Server\MSAS12.MSSQLSERVER\OLAP\Log
ASPROVIDERMSOLAP: 1
ASSERVERMODE: MULTIDIMENSIONAL
ASSVCACCOUNT: NT Service\MSSQLServerOLAPService
ASSVCPASSWORD: <empty>
ASSVCSTARTUPTYPE: Automatic
ASSYSADMINACCOUNTS: Administrator
ASTEMPDIR: C:\Program Files\Microsoft SQL Server\MSAS12.MSSQLSERVER\OLAP\Temp
BROWSERSVCSTARTUPTYPE: Disabled
CLTCTLRNAME:
CLTRESULTDIR: C:\Program Files (x86)\Microsoft SQL Server\DReplayClient\ResultDir\
CLTSTARTUPTYPE: Manual
CLTSVCACCOUNT: NT Service\SQL Server Distributed Replay Client
CLTSVCPASSWORD: <empty>
CLTWORKINGDIR: C:\Program Files (x86)\Microsoft SQL Server\DReplayClient\WorkingDir\
COMMFABRICENCRYPTION: 0
COMMFABRICNETWORKLEVEL: 0
COMMFABRICPORT: 0
CONFIGURATIONFILE: C:\Program Files\Microsoft SQL Server\120\Setup Bootstrap\Log\20150218_210907\ConfigurationFile.ini
CTLRSTARTUPTYPE: Manual
CTLRSVCACCOUNT: NT Service\SQL Server Distributed Replay Controller
CTLRSVCPASSWORD: <empty>
CTLRUSERS:
ENABLERANU: false
ENU: true
ERRORREPORTING: true
FEATURES: SQLENGINE, REPLICATION, FULLTEXT, DQ, AS, DQC, CONN, IS, BC, SDK, BOL, SSMS, ADV_SSMS, DREPLAY_CTLR, DREPLAY_CLT, SNAC_SDK, MDS
FILESTREAMLEVEL: 0
FILESTREAMSHARENAME: <empty>
FTSVCACCOUNT: NT Service\MSSQLFDLauncher
FTSVCPASSWORD: <empty>
HELP: false
IACCEPTSQLSERVERLICENSETERMS: true
INDICATEPROGRESS: false
INSTALLSHAREDDIR: C:\Program Files\Microsoft SQL Server\
INSTALLSHAREDWOWDIR: C:\Program Files (x86)\Microsoft SQL Server\
INSTALLSQLDATADIR: <empty>
INSTANCEDIR: C:\Program Files\Microsoft SQL Server\
INSTANCEID: MSSQLSERVER
INSTANCENAME: MSSQLSERVER
ISSVCACCOUNT: NT Service\MsDtsServer120
ISSVCPASSWORD: <empty>
ISSVCSTARTUPTYPE: Automatic
MATRIXCMBRICKCOMMPORT: 0
MATRIXCMSERVERNAME: <empty>
MATRIXNAME: <empty>
NPENABLED: 0
PID: *****
QUIET: true
QUIETSIMPLE: false
ROLE:
RSINSTALLMODE: DefaultNativeMode
RSSHPINSTALLMODE: DefaultSharePointMode
RSSVCACCOUNT: <empty>
RSSVCPASSWORD: <empty>
RSSVCSTARTUPTYPE: Automatic
SAPWD: <empty>
SECURITYMODE: <empty>
SQLBACKUPDIR: C:\Backup
SQLCOLLATION: SQL_Latin1_General_CP1_CI_AS
SQLSVCACCOUNT: Administrator
SQLSVCPASSWORD: *****
SQLSVCSTARTUPTYPE: Automatic
SQLSYSADMINACCOUNTS: Administrator, Administrator
SQLTEMPDBDIR: C:\TempDB
SQLTEMPDBLOGDIR: C:\TempLog
SQLUSERDBDIR: C:\Data
SQLUSERDBLOGDIR: C:\Log
SQMREPORTING: false
TCPENABLED: 1
UIMODE: Normal
UpdateEnabled: true
UpdateSource: C:\InstallSQL\SQL2014\Updates
USEMICROSOFTUPDATE: false
X86: false
Configuration file: C:\Program Files\Microsoft SQL Server\120\Setup Bootstrap\Log\20150218_210907\ConfigurationFile.ini
Detailed results:
Feature: Management Tools - Complete
Status: Passed
Feature: Client Tools Connectivity
Status: Passed
Feature: Client Tools SDK
Status: Passed
Feature: Client Tools Backwards Compatibility
Status: Passed
Feature: Management Tools - Basic
Status: Passed
Feature: Database Engine Services
Status: Passed
Feature: Data Quality Services
Status: Passed
Feature: Full-Text and Semantic Extractions for Search
Status: Passed
Feature: SQL Server Replication
Status: Passed
Feature: Master Data Services
Status: Passed
Feature: Distributed Replay Client
Status: Passed
Feature: Distributed Replay Controller
Status: Passed
Feature: Integration Services
Status: Passed
Feature: Data Quality Client
Status: Passed
Feature: Analysis Services
Status: Passed
Feature: SQL Browser
Status: Passed
Feature: Documentation Components
Status: Passed
Feature: SQL Writer
Status: Passed
Feature: SQL Client Connectivity
Status: Passed
Feature: SQL Client Connectivity SDK
Status: Passed
Feature: Setup Support Files
Status: Passed
Rules with failures:
Global rules:
Scenario specific rules:
Rules report file: C:\Program Files\Microsoft SQL Server\120\Setup Bootstrap\Log\20150218_210907\SystemConfigurationCheck_Report.htm
The problem I run into is that when I attempt to invoke this particular script from CI tools, the error triggers a false failure exit from the provisioning run. Additionally, the error seems to happen before I can install updates.
I am using SQL 2014. Does anybody have any ideas?
Hi jrv,
1. Unsure; there is almost zero documentation of this methodology on the internet.
2. Remembered, which is why I posted. I'm hoping somebody else has run into this road block before.
3. As far as debugging the script goes, I have tried every variety of flag I can think of. I haven't had the time to debug the actual xSQLServer resource.
4. What do you mean? Those are the errors. :) These are the only things the system has presented.
I'd like to continue using the DSC methodology, as it falls in line with our current automation patterns. However, it seems a pre-built ini or a flagged install might have to be the primary method. This is disappointing because it means that Puppet still does a better job of configuring SQL than DSC; it's also disappointing to hear the suggestion that the latest DSC resources aren't being developed for the latest products.
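For what it's worth, one common cause of a "false exit" from CI wrappers around the SQL installer is exit code 3010 ("succeeded, reboot required"), which many pipelines treat as failure. A rough sketch of normalizing that before reporting status (Python purely for illustration; the benign-code set and the idea that 3010 is your culprit are assumptions to verify against your Summary.txt):

```python
import subprocess

# Exit codes the SQL Server installer can return on success.
# 3010 means "succeeded, reboot required" and is a frequent cause of
# CI tools reporting a false failure. (Assumed set -- verify against
# your installer's documentation and Summary.txt.)
BENIGN_EXIT_CODES = {0, 3010}

def is_benign_exit(code):
    """True if the installer exit code should be treated as success."""
    return code in BENIGN_EXIT_CODES

def run_sql_setup(setup_exe, config_ini):
    """Run setup.exe with a prepared ConfigurationFile.ini and raise
    only on genuinely failing exit codes."""
    result = subprocess.run(
        [setup_exe, "/ConfigurationFile=" + config_ini])
    if not is_benign_exit(result.returncode):
        raise RuntimeError("SQL setup failed with exit code %d"
                           % result.returncode)
    return result.returncode
```

The same normalization could live inside a Script resource if you stay with DSC, so that a reboot-required result does not abort the provision.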
Nathan Julsrud -
I recently ran Monolingual and removed all but the Intel 64-bit architectures. Now my iPhoto (along with iDVD, GarageBand, and iMovie) will not open. Here is the message that I get.
Process: iPhoto [3543]
Path: /Applications/iPhoto.app/Contents/MacOS/iPhoto
Identifier: com.apple.iPhoto
Version: ??? (???)
Build Info: iPhotoProject-4750000~1
Code Type: X86 (Native)
Parent Process: launchd [109]
Date/Time: 2011-06-10 21:48:59.821 -0500
OS Version: Mac OS X 10.6.7 (10J869)
Report Version: 6
Interval Since Last Report: -4164908 sec
Crashes Since Last Report: 8
Per-App Crashes Since Last Report: 11
Anonymous UUID: 45357CCD-011B-482E-A2EA-CF42096F1321
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000002, 0x0000000000000000
Crashed Thread: 0
Dyld Error Message:
Library not loaded: /Library/Frameworks/iLifeSlideshow.framework/Versions/A/iLifeSlideshow
Referenced from: /Applications/iPhoto.app/Contents/MacOS/iPhoto
Reason: no suitable image found. Did find:
/Library/Frameworks/iLifeSlideshow.framework/Versions/A/iLifeSlideshow: mach-o, but wrong architecture
/Library/Frameworks/iLifeSlideshow.framework/Versions/A/iLifeSlideshow: mach-o, but wrong architecture
Binary Images:
0x8fe00000 - 0x8fe4162b dyld 132.1 (???) <1C06ECD9-A2D7-BB10-AF50-0F2B598A7DEC> /usr/lib/dyld
Model: iMac10,1, BootROM IM101.00CC.B00, 2 processors, Intel Core 2 Duo, 3.06 GHz, 4 GB, SMC 1.53f13
Graphics: ATI Radeon HD 4670, ATI Radeon HD 4670, PCIe, 256 MB
Memory Module: global_name
AirPort: spairport_wireless_card_type_airport_extreme (0x168C, 0x8F), Atheros 9280: 2.1.14.5
Bluetooth: Version 2.4.0f1, 2 service, 19 devices, 1 incoming serial ports
Network Service: Built-in Ethernet, Ethernet, en0
Serial ATA Device: ST31000528ASQ, 931.51 GB
Serial ATA Device: OPTIARC DVD RW AD-5680H
USB Device: USB2.0 Hub, 0x05e3 (Genesys Logic, Inc.), 0x0608, 0x24300000
USB Device: Built-in iSight, 0x05ac (Apple Inc.), 0x8502, 0x24400000
USB Device: External HDD, 0x1058 (Western Digital Technologies, Inc.), 0x0901, 0x26400000
USB Device: Internal Memory Card Reader, 0x05ac (Apple Inc.), 0x8403, 0x26500000
USB Device: IR Receiver, 0x05ac (Apple Inc.), 0x8242, 0x04500000
USB Device: BRCM2046 Hub, 0x0a5c (Broadcom Corp.), 0x4500, 0x06100000
USB Device: Bluetooth USB Host Controller, 0x05ac (Apple Inc.), 0x8215, 0x06110000
Please let me know when you find a fix. I did the same thing and have tried every suggestion I can find online. The message I get is...
Process: iPhoto [4991]
Path: /Applications/iPhoto.app/Contents/MacOS/iPhoto
Identifier: com.apple.iPhoto
Version: ??? (???)
Build Info: iPhotoProject-6070000~1
Code Type: X86 (Native)
Parent Process: launchd [142]
Date/Time: 2011-06-13 23:39:38.485 +1200
OS Version: Mac OS X 10.6.7 (10J869)
Report Version: 6
Interval Since Last Report: -1643976 sec
Crashes Since Last Report: 35
Per-App Crashes Since Last Report: 12
Anonymous UUID: D4811036-EA8D-479D-8D9F-11E2FC8F6D4C
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000002, 0x0000000000000000
Crashed Thread: 0
Dyld Error Message:
Library not loaded: /Library/Frameworks/iLifeSlideshow.framework/Versions/A/iLifeSlideshow
Referenced from: /Applications/iPhoto.app/Contents/MacOS/iPhoto
Reason: no suitable image found. Did find:
/Library/Frameworks/iLifeSlideshow.framework/Versions/A/iLifeSlideshow: mach-o, but wrong architecture
/Library/Frameworks/iLifeSlideshow.framework/Versions/A/iLifeSlideshow: mach-o, but wrong architecture
Binary Images:
0x8fe00000 - 0x8fe4162b dyld 132.1 (???) <1C06ECD9-A2D7-BB10-AF50-0F2B598A7DEC> /usr/lib/dyld
Model: MacBookPro7,1, BootROM MBP71.0039.B0B, 2 processors, Intel Core 2 Duo, 2.4 GHz, 4 GB, SMC 1.62f6
Graphics: NVIDIA GeForce 320M, NVIDIA GeForce 320M, PCI, 256 MB
Memory Module: global_name
AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x8D), Broadcom BCM43xx 1.0 (5.10.131.36.9)
Bluetooth: Version 2.4.0f1, 2 service, 19 devices, 1 incoming serial ports
Network Service: AirPort, AirPort, en1
Serial ATA Device: Hitachi HTS545025B9SA02, 232.89 GB
Serial ATA Device: MATSHITADVD-R UJ-898, 3.5 GB
USB Device: Internal Memory Card Reader, 0x05ac (Apple Inc.), 0x8403, 0x26100000
USB Device: Built-in iSight, 0x05ac (Apple Inc.), 0x8507, 0x24600000
USB Device: BRCM2046 Hub, 0x0a5c (Broadcom Corp.), 0x4500, 0x06600000
USB Device: Bluetooth USB Host Controller, 0x05ac (Apple Inc.), 0x8213, 0x06610000
USB Device: IR Receiver, 0x05ac (Apple Inc.), 0x8242, 0x06500000
USB Device: Apple Internal Keyboard / Trackpad, 0x05ac (Apple Inc.), 0x0236, 0x06300000
I have reinstalled Mac OSX 10.6.3 and done the updates from there.
I have reinstalled ilife 11 from disk and done the updates.
I have deleted all the suggested files and then redone install and updates.
I have tried just reinstalling iphoto and doing updates.
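Before another reinstall cycle, it may be worth verifying what is actually on disk: running `lipo -info` (or `file`) on /Library/Frameworks/iLifeSlideshow.framework/Versions/A/iLifeSlideshow shows which architectures survive. As an illustration of what those tools inspect, here is a rough Python sketch that classifies a binary by its leading Mach-O magic number (constants are the standard ones from mach-o/loader.h). A framework that Monolingual has stripped to a single 64-bit slice shows up as "thin 64-bit", which is exactly the "wrong architecture" case for the 32-bit (X86) iPhoto executable in the crash report:

```python
import struct

# Mach-O / fat-binary magic numbers (first four bytes of the file),
# as defined in mach-o/loader.h and mach-o/fat.h.
MH_MAGIC = 0xFEEDFACE      # 32-bit
MH_CIGAM = 0xCEFAEDFE      # 32-bit, byte-swapped on disk
MH_MAGIC_64 = 0xFEEDFACF   # 64-bit
MH_CIGAM_64 = 0xCFFAEDFE   # 64-bit, byte-swapped on disk
FAT_MAGIC = 0xCAFEBABE     # universal ("fat") binary with several slices

def classify_macho(header: bytes) -> str:
    """Classify a binary by its leading magic number."""
    (magic,) = struct.unpack(">I", header[:4])
    if magic == FAT_MAGIC:
        return "universal"
    if magic in (MH_MAGIC, MH_CIGAM):
        return "thin 32-bit"
    if magic in (MH_MAGIC_64, MH_CIGAM_64):
        return "thin 64-bit"
    return "not Mach-O"
```

If the framework really is thin 64-bit, no amount of reinstalling iPhoto alone will fix it; the framework itself has to come back from the iLife installer.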
Is there any way to get a replacement - /Library/Frameworks/iLifeSlideshow.framework/Versions/A/iLifeSlideshow
file with the right architecture? -
Performance of Modbus using DSC Shared Variables
I'm fairly new at using Modbus with LabVIEW. Out of the roughly dozen tools and APIs that can be used, for one project I'm working on I decided to try using Shared Variables aliased to Modbus registers in the project, which is a DSC tool. It seemed like a clever way to go. I've used Shared Variables in the past, though, and am aware of some of the issues surrounding them, especially when the number of them begins to increase. I'll only have about 120 variables, so I don't think it will be too bad, but I'm beginning to be a bit concerned...
The way I started doing this was to create a new shared variable for every data point. What I've noticed since then is that there is a mechanism for addressing multiple registers at once using an array of values. (Unfortunately, even if I wanted to use the array method, I probably couldn't. The Modbus points I am interfacing to are for a custom device, and the programmer didn't bother using consecutive registers...) But in any case, I was wondering what the performance issues might be surrounding this API.
I'm guessing that:
1) All the caveats of shared variables apply. These really are shared variables; it's just that DSC taught the SV Engine how to go read them. Is that right?
And I'm wondering:
2) Is there any performance improvement for reading an array of consecutive variables rather than reading each variable individually?
3) Are there any performance issues above what shared variables normally have, when using Modbus specifically? (E.g. how often can you read a few hundred Modbus points from the same device?)
Thanks,
DaveT
David Thomson Original Code Consulting
www.originalcode.com
National Instruments Alliance Program Member
Certified LabVIEW Architect
There are 10 kinds of people: those who understand binary, and those who don't.
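On question 2 above: batching generally does help, because each Modbus transaction pays a full request/response round trip (plus turnaround delay on serial links), so one read of N consecutive registers costs roughly one round trip instead of N. As a rough, non-DSC illustration (Python; the 125-register cap is the protocol limit for a single Read Holding/Input Registers request), here is how a scattered set of register addresses might be grouped into the minimum number of transactions:

```python
def plan_modbus_reads(addresses, max_per_request=125):
    """Group register addresses into runs of consecutive addresses so
    that each run can be fetched with one read transaction.  Returns a
    list of (start_address, count) pairs.  125 registers per request
    is the Modbus protocol limit for function codes 3 and 4."""
    runs = []
    for addr in sorted(set(addresses)):
        if runs:
            start, count = runs[-1]
            if addr == start + count and count < max_per_request:
                runs[-1] = (start, count + 1)  # extend the current run
                continue
        runs.append((addr, 1))                 # start a new run
    return runs

# Non-consecutive points (like the custom device described above)
# degrade to one transaction apiece, which is where the per-point
# polling cost comes from.
```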
Solved!
Go to Solution.
Anna,
Thanks so much for the reply. That helps a lot.
I am still wondering about one thing, though. According to the documentation, the "A" prefix in a Modbus DSC address means that it will return an array of data, whereas something like the F prefix is for a single precision float. When I create a channel, I pick the F300001 option, and the address that is returned is a range: F300001 - F365534. The range would imply that a series of values will be returned, e.g. an array. I always just delete the range and enter a single address. Is that the intention? Does it return the range just so you know the range of allowed addresses?
OK, I'm actually wondering two things. Is there a reason why the DSC addresses start at 1 (e.g. F300001) instead of 0 (like F300000)? One of our devices, which uses the old Modbus API from LV7, has a register at 0. How would that be handled in DSC?
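On the off-by-one: the usual Modbus convention is that the six-digit data-model number is 1-based while the address actually sent on the wire is 0-based, so F300001 corresponds to wire address 0, and a device register at wire address 0 is model number x00001 in a 1-based scheme. A hedged sketch of the mapping (Python, illustrative only; vendors are inconsistent about which numbering they document, so check the device manual):

```python
# Leading digit of a six-digit Modbus reference selects the table.
# (Assumed mapping, per the common Modbus data-model convention.)
TABLES = {"0": "coil", "1": "discrete input",
          "3": "input register", "4": "holding register"}

def model_to_wire(ref):
    """Convert a 1-based data-model reference like '300001' to a
    (table, zero_based_wire_address) pair."""
    return TABLES[ref[0]], int(ref[1:]) - 1

def wire_to_model(table_digit, address):
    """Inverse mapping: wire address 0 in table 3 -> '300001'."""
    return "%s%05d" % (table_digit, address + 1)
```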
Thanks,
Dave
David Thomson Original Code Consulting
www.originalcode.com
National Instruments Alliance Program Member
Certified LabVIEW Architect
There are 10 kinds of people: those who understand binary, and those who don't. -
Is it wise to keep the Nikon camera's files (the "DSC" files) after downloading them and converting to DNG files via the Adobe converter for Lightroom use? In other words, do the DNG files have all the raw data I would ever need in processing, or should I save the camera's DSC files?
DNG files do not contain some metadata supplied by the camera, which can be used by the manufacturer's software. Thus, if you don't keep the original Raw photo, you will lose this information.
If you're 1000% sure you're never going to use the manufacturer's software, then this isn't a problem. But who can be sure what software you'll be using 10 years from now? -
What architecture is best for accurate data logging
Hello,
I'm designing some LabVIEW code for a standard DAQ application that is required to plot about 100 variables onto the screen on several different graphs and numeric indicators, as well as perform some simple feedback control and log data to a file once a second.
I've done this before, using a simple state machine architecture where one state takes care of my logging, with a timer VI in there that counts down 1 second and then writes to file. However, this method makes me miss a second every once in a while.
I started looking into the producer/consumer architecture as a possible remedy, because I hear it's good for running two things at different rates: I'd have my quicker loop handling data acquisition, plots, and feedback control, and my slower logging loop executing once a second. But I don't see how to implement this.
Questions:
1. Is a simple producer/consumer the right topology for my application?
2. When I create my queue, do I create it as a 100-element array (my data for logging), enqueue that in my producer loop from my data acquisition, and then pass it to the logging VI? This seems wrong to me, because I'm going to be enqueuing a lot of 100-element arrays and dequeuing them slowly, at once a second.
3. How do I trigger my consumer loop to execute every second? Should I set it up as a timed while loop, or should something from the producer loop tell it to?
I'm sure this is a pretty standard thing to do; I'm just not sure how to implement the correct architecture.
Much thanks!
OK, let's try this. I've put together an example that should do what you need. I put notes in the block diagram, but essentially it runs data in a while loop at whatever execution rate you specify, then sends the data to another graph (or, in your case, a log) every second. Basically, I've used a 100 ms execution rate for the while loop; then every 10th time (you can change this if you want), it sends a Boolean 'true' to a case structure within the while loop that contains the Enqueue Element. The graphs that I included show that it does indeed add a new point to the second graph once a second while the first one adds a point every 100 ms.
The actual wiring of this VI could be cleaner, for sure, but it was a quick and dirty example I put together. Hopefully this will help you accomplish what you're trying to do.
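The decimation-plus-queue structure described above can also be sketched in text form; this is Python purely as an illustration of the pattern (the LabVIEW version uses Enqueue Element inside a case structure and Dequeue Element in the consumer). The key point is that the consumer blocks on the queue, so it never misses a record the way a free-running one-second timer can:

```python
import queue
import threading

def acquire_sample(i):
    # Placeholder for the real DAQ read; returns the "100 variables".
    return [float(i)] * 100

def producer(q, iterations, decimation=10):
    """Fast loop: acquire every iteration, enqueue every Nth sample
    for the slow logging loop.  In the real application the loop
    period would be set by the DAQ timing (e.g. 100 ms)."""
    for i in range(iterations):
        sample = acquire_sample(i)
        if i % decimation == 0:
            q.put(sample)          # hand off to the consumer
    q.put(None)                    # sentinel: tell the consumer to stop

def consumer(q, log):
    """Slow loop: blocks on the queue, so it wakes only when there is
    something to log -- no missed seconds from a drifting timer."""
    while True:
        item = q.get()
        if item is None:
            break
        log.append(item)           # stand-in for the 1 Hz file write

log = []
q = queue.Queue()
t = threading.Thread(target=consumer, args=(q, log))
t.start()
producer(q, iterations=50)         # 50 fast iterations, decimation 10
t.join()                           # -> 5 logged records
```

Because the queue buffers the hand-off, a momentarily slow file write delays the log entries rather than losing them, which addresses the "missed second" problem in the original state machine.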
Regards,
Austin S.
National Instruments
Academic Field Engineer
Attachments:
Enqueue array 2.vi 28 KB -
Single Sign on in a 3 tier architecture between SAP Netweaver CE and R/3
Hi All,
I am trying to implement SSO using SAP logon tickets in a 3 tier architecture between NW CE and R/3. But so far I have not been able to crack this.
Let me describe the scenario in detail:
We have two Java EE applications on Netweaver CE7.2 Application Server:
1. UI: Just handles all the UI logic : js, jsp, css, html, extjs .It calls the Business Layer Java EE application to get data from back-end systems.
2. Business Layer: Calls R/3 SOAP services does some processing on them and exposes the data back to the UI via a Restful JSON service (implemented using Java Spring framework)
Both UI and Business Layer Java EE applications define login modules to be used for SAP logon tickets. So the architecture is like this:
UI --REST--> Business Layer --SOAP--> ABAP R/3
So ideally, when the UI link is clicked, it prompts the user for authentication (using the CE UME), and then the UI application calls the Business Layer, which then calls R/3. This unfortunately doesn't work: the authentication between the UI and the Business Layer application fails.
However, if you remove the Business Layer Java EE application and call the SOAP service directly from the UI, SAP logon tickets start working.
So I have been able to make SAP logon tickets work with the following 2 tier architecture:
UI --SOAP--> R/3
So my Question is:
Is there a way to use SAP logon tickets in a 3-tier architecture between NW CE and R/3 (for the scenario described above)? Any help/pointers/documentation links would be great.
Hey Martin,
To enable SSO I updated web.xml and engine-j2ee.xml for both UI and Business Layer application according to the login module stacks defined (the first one) in the following link:
http://help.sap.com/saphelp_NW70EHP1/helpdata/en/04/120b40c6c01961e10000000a155106/content.htm
Initially both UI and Business Layer had the same entries in web.xml and engine.xml. But since this was not working, I did all kinds of testing. For the UI I used FORM-based authentication, and for the Business Layer I was using BASIC authentication.
I tested the following Scenarios:
1. Without any changes to the above XML files: the Business Layer rejects any requests from the UI. I checked the browser, and the "MYSAPSSO2" cookie was created. Somehow the UI doesn't use this to call the Business Layer, or the Business Layer rejects the token itself.
2. I removed authentication from the Business Layer application (web.xml), keeping the UI the same: the call went to R/3 but returned an "Unauthorized" error. In this case too, the "MYSAPSSO2" token was created at the browser level but was not used by the Business Layer to call R/3.
3. Then I did all sorts of permutations and combinations with the sample login modules provided (see link above) on both the UI and Business Layer applications. Nothing worked; all combinations led to the same two results as 1 and 2.
It seems all this is happening because of the extra application between the UI and R/3.
Hope this clarifies things.
Thanks,
Dhannajay -
Books about MVVM, architecture, design patterns for Windows Phone 8.1
Hi,
I'm looking for a book or books (or other resources) that explain how to develop an app with a proper architecture: I mean what each layer (business layer, data layer, network access) should do and what it should look like. I'm also looking for a book about MVVM.
Right now I'm struggling with how to create a layer for network communication: how to separate classes for requests and responses, how to manage requests and create a queue of requests (also providing a way to cancel them when they are no longer needed), and how to work with servers that use some level of security (cookies, certificates, etc.).
Another thing is caching: how to design a short-term cache or a persistent cache (database), what technologies I can use, etc.
The last thing I'm struggling with is naming: how to name classes in those layers, e.g. to distinguish between classes mapping data from an ORM database and classes mapping JSON in network communication.
I hope you got the idea :)
Thanks.
Currently I can't find a book about the MVVM pattern for Windows Phone 8.1 specifically, but I think MSDN and some blogs have useful samples and concepts: http://msdn.microsoft.com/en-us/library/windows/apps/jj883732.aspx
http://channel9.msdn.com/Series/Windows-Phone-8-1-Development-for-Absolute-Beginners
And I think your question covers too many topics; maybe you should split it into blocks and get help in the related forums.
Best Regards,
Please remember to mark the replies as answers if they help