DateTime split
The Date attribute of the source schema in the mapping arrives in two formats:
23/12/2012 12:13:12 or
23/12/2001 12:12
It should be split into a date (in the format 23122012) and a time (in the format 121312 or 1213), which map to two attributes in the destination schema.
In other words, one source-schema attribute maps to two destination-schema attributes (one for the date, the other for the time).
Use a Scripting functoid with an Inline C# script such as the following.
To get the date (as 23122012):
public string DateFormatter(string dt)
{
    return Convert.ToDateTime(dt).ToString("ddMMyyyy");
}
To get the time (as 121312; "HHmmss" zero-pads each component, whereas the original "hms" pattern drops leading zeros, e.g. 9:05:07 becomes "957"):
public string TimeFormatter(string dt)
{
    // If the minute-only input must yield 1213 rather than 121300,
    // trim the trailing "00" or test the source string first.
    return Convert.ToDateTime(dt).ToString("HHmmss");
}
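Outside BizTalk, the same two-format split can be sketched in Python (an illustration of the logic only; the function name split_datetime is made up for this example):

```python
from datetime import datetime

def split_datetime(value: str) -> tuple[str, str]:
    """Split '23/12/2012 12:13:12' or '23/12/2001 12:12' into
    a date part (ddMMyyyy) and a time part (HHMMSS or HHMM)."""
    # Try the seconds-bearing format first, then the minute-only one.
    for parse_fmt, time_fmt in (("%d/%m/%Y %H:%M:%S", "%H%M%S"),
                                ("%d/%m/%Y %H:%M", "%H%M")):
        try:
            dt = datetime.strptime(value, parse_fmt)
        except ValueError:
            continue
        return dt.strftime("%d%m%Y"), dt.strftime(time_fmt)
    raise ValueError(f"unrecognised timestamp: {value!r}")

print(split_datetime("23/12/2012 12:13:12"))  # ('23122012', '121312')
print(split_datetime("23/12/2001 12:12"))     # ('23122001', '1212')
```

Trying the stricter format first means the minute-only branch never swallows a value that actually carries seconds.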
If this answers your question please mark it accordingly. If this post is helpful, please vote as helpful by clicking the upward arrow mark next to my reply.
Similar Messages
-
Export-Csv only generating output for a single server
Hi Team,
With the script below I'm unable to generate a single output file covering all servers. The script only writes the last server's output; it skips all the other servers in the file.
$ScriptBlock = {
    param (
        $Server,
        $ExportCSV
    )
    $Counters = Import-Csv "G:\testcounter.csv"
    $MasterArray = @()
    foreach ($Counter in $Counters) {
        $ObjectName = $Counter.ObjectName
        $CounterName = $Counter.CounterName
        $InstanceName = $Counter.InstanceName
        $Result = Get-Counter -Counter "\\$Server\$ObjectName($InstanceName)\$CounterName"
        $CounterSamples = $Result | % { $_.CounterSamples }
        foreach ($CounterSample in $CounterSamples) {
            $TempArray = "" | Select Server, ObjectName, CounterName, InstanceName, SampleValue, DateTime
            $Split = $CounterSample.Path.Remove(0,2)
            $Split = $Split.Split("\")
            $TempArray.Server = $Split[0]
            $TempArray.ObjectName = $Split[1].Split("(")[0]
            $TempArray.CounterName = $Split[2]
            $TempArray.InstanceName = $CounterSample.InstanceName
            $TempArray.SampleValue = $CounterSample.CookedValue
            $TempArray.DateTime = $CounterSample.TimeStamp.ToString("yyyy-MM-dd HH:mm:ss")
            $MasterArray += $TempArray
        }
    }
    # Without -Append each job overwrites the file, leaving only the last server's rows
    $MasterArray | Export-Csv $ExportCSV -NoTypeInformation -Append
}
$Servers = Import-Csv "G:\testcounter.csv"
foreach ($Server in $Servers) {
    $Server = $Server.Server
    if (Test-Connection -Quiet -Computer $Server) {
        $ExportCSV = "G:\PerformaneData.csv"
        Start-Job -ScriptBlock $ScriptBlock -ArgumentList @($Server, $ExportCSV)
    }
}

Hi RatheeshAV,
In addition, to export the result to csv file, please also try to wait for all the jobs to complete then retrieve all the data and write it to a file in one step:
$ScriptBlock = {
    param ($Server)
    #SCRIPT
    $MasterArray
}
$Servers = Import-Csv "G:\testcounter.csv"
Get-Job | Remove-Job
$jobs = @()
foreach ($Server in $Servers) {
    $Server = $Server.Server
    if (Test-Connection -Quiet -Computer $Server) {
        Write-Host $Server -ForegroundColor Green
        $jobs += Start-Job -ScriptBlock $ScriptBlock -ArgumentList $Server
    }
}
$jobs | Wait-Job
$jobs | Receive-Job | Export-Csv 'd:\temp.csv' -NoTypeInformation
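The wait-for-everything-then-write-once pattern described above is language-neutral; here is a minimal Python sketch of the same idea, where collect_counters is a made-up stand-in for the real per-server Get-Counter work:

```python
import csv
import io
from concurrent.futures import ThreadPoolExecutor

def collect_counters(server: str) -> list[dict]:
    # Stand-in for the real sampling work done per server.
    return [{"Server": server, "CounterName": "demo", "SampleValue": 1}]

servers = ["srv1", "srv2", "srv3"]

# Start one worker per server and wait for all of them to finish...
with ThreadPoolExecutor() as pool:
    rows = [row
            for result in pool.map(collect_counters, servers)
            for row in result]

# ...then write the combined result in a single step, so no worker
# ever overwrites another worker's output.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["Server", "CounterName", "SampleValue"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```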
Refer to:
PS3 Export-CSV -Append from multiple instances to the same csv file
If there is anything else regarding this issue, please feel free to post back.
Best Regards,
Anna Wang -
How to split a string by datetime in SQL
Hi,
How to split a string by datetime in SQL? I have a table whose comments column stores comments prefixed by datetime; when selecting, I want to split them out into one row per entry for each jobref.
Can anyone help me with this, please?
Thanks,

declare @callcentre table (comments varchar(max),lbiref varchar(200))
insert into @callcentre
select '(28/10/2014 14:56:14) xyz ..... call logged (28/10/2014 14:56:58) xyz ..... call updated (28/10/2014 14:57:41)xyz ..... call updated','Vi2910201'
insert into @callcentre
select '(29/10/2014 14:56:14) xyz ..... call logged (29/10/2014 14:56:58) xyz ..... call updated (29/10/2014 14:57:41)xyz ..... call updated','Vi2910202'
insert into @callcentre
select '(30/10/2014 14:56:14) xyz ..... call logged (30/10/2014 14:56:58) xyz ..... call updated','Vi2910203'
output:
1) 28/10/2014 14:56:14, (28/10/2014 14:56:14) xyz ..... call logged ,'Vi2910201'
2) 28/10/2014 14:56:58 ,(28/10/2014 14:56:58) xyz ..... call updated ,'Vi2910201'
3) 28/10/2014 14:57:41, (28/10/2014 14:57:41)xyz ..... call updated,'Vi2910201'
4) 29/10/2014 14:56:14, (29/10/2014 14:56:14) xyz ..... call logged ,'Vi2910202'
5) 29/10/2014 14:56:58 ,(29/10/2014 14:56:58) xyz ..... call updated ,'Vi2910202'
6) 29/10/2014 14:57:41, (29/10/2014 14:57:41)xyz ..... call updated,'Vi2910202'
7) 30/10/2014 14:56:14, (30/10/2014 14:56:14) xyz ..... call logged ,'Vi2910203'
8) 30/10/2014 14:56:58 ,(30/10/2014 14:56:58) xyz ..... call updated ,'Vi2910203'
Thanks,
See this illustration
declare @callcentre table (comments varchar(max),lbiref varchar(200))
insert into @callcentre
select '(28/10/2014 14:56:14) xyz ..... call logged (28/10/2014 14:56:58) xyz ..... call updated (28/10/2014 14:57:41)xyz ..... call updated','Vi2910201'
insert into @callcentre
select '(29/10/2014 14:56:14) xyz ..... call logged (29/10/2014 14:56:58) xyz ..... call updated (29/10/2014 14:57:41)xyz ..... call updated','Vi2910202'
insert into @callcentre
select '(30/10/2014 14:56:14) xyz ..... call logged (30/10/2014 14:56:58) xyz ..... call updated','Vi2910203'
SELECT LEFT(p.u.value('.[1]','varchar(max)'),CHARINDEX(')',p.u.value('.[1]','varchar(max)'))-1) AS [Date],
'(' + p.u.value('.[1]','varchar(max)') AS comments,
lbiref
FROM
(
SELECT lbiref,CAST('<Root>' + STUFF(REPLACE(comments,'(','</Data><Data>'),1,7,'') + '</Data></Root>' AS XML) AS x
FROM @callcentre c
)t
CROSS APPLY x.nodes('/Root/Data')p(u)
and the output
Date comments lbiref
28/10/2014 14:56:14 (28/10/2014 14:56:14) xyz ..... call logged Vi2910201
28/10/2014 14:56:58 (28/10/2014 14:56:58) xyz ..... call updated Vi2910201
28/10/2014 14:57:41 (28/10/2014 14:57:41)xyz ..... call updated Vi2910201
29/10/2014 14:56:14 (29/10/2014 14:56:14) xyz ..... call logged Vi2910202
29/10/2014 14:56:58 (29/10/2014 14:56:58) xyz ..... call updated Vi2910202
29/10/2014 14:57:41 (29/10/2014 14:57:41)xyz ..... call updated Vi2910202
30/10/2014 14:56:14 (30/10/2014 14:56:14) xyz ..... call logged Vi2910203
30/10/2014 14:56:58 (30/10/2014 14:56:58) xyz ..... call updated Vi2910203
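For comparison with the XML trick above, the same tokenisation can be sketched outside T-SQL by splitting at each "(dd/mm/yyyy hh:mm:ss)" marker (a hypothetical Python illustration, not part of the answer):

```python
import re

# Zero-width split: cut the string just before each "(dd/mm/yyyy hh:mm:ss)".
TS_BOUNDARY = re.compile(r"(?=\(\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}\))")

def split_comments(comments: str, lbiref: str) -> list[tuple[str, str, str]]:
    rows = []
    for part in TS_BOUNDARY.split(comments):
        part = part.strip()
        if not part:
            continue
        ts = part[1:part.index(")")]   # timestamp without the parentheses
        rows.append((ts, part, lbiref))
    return rows

rows = split_comments(
    "(28/10/2014 14:56:14) xyz ..... call logged "
    "(28/10/2014 14:56:58) xyz ..... call updated "
    "(28/10/2014 14:57:41)xyz ..... call updated",
    "Vi2910201",
)
for row in rows:
    print(row)
```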
Please Mark This As Answer if it solved your issue
Visakh -
EZVPN public internet split tunnel with dialer interface
I have a job where I need to use EZVPN with split tunnel but still have access to an external server via the corporate network, as the external server will only accept connections from the corporate public IP address.
So I have not only included the corporate C class in the interesting traffic but also the IP address of the external server.
So all good so far, traffic for the corporate network goes down the tunnel as well as the IP address for the external server.
Now comes the problem: I am trying to send the public IP traffic for the external server out of the corporate network to the public internet, but it just drops and does not hairpin back out the same interface to the internet.
I checked out this procedure and it did not help as the route map counters do not increase with my attempt to reach the external router.
http://www.cisco.com/c/en/us/support/docs/security/vpn-client/71461-router-vpnclient-pi-stick.html
And just to test the process, I removed the split tunnel and have everything going down the tunnel so I can test with any web site. I also have a home server on the network that is reachable, so I can definitely reach into the network at home, which stands in for the corporate network I am trying to reach.
It's a Cisco 870 router and here is the config:
Router#sh run
Building configuration...
Current configuration : 4617 bytes
version 12.4
no service pad
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
hostname Router
boot-start-marker
boot-end-marker
logging message-counter syslog
enable secret 5 *************************
enable password *************************
aaa new-model
aaa authentication login default local
aaa authentication login ciscocp_vpn_xauth_ml_1 local
aaa authorization exec default local
aaa authorization network ciscocp_vpn_group_ml_1 local
aaa session-id common
dot11 syslog
ip source-route
ip dhcp excluded-address 192.168.1.1
ip dhcp excluded-address 192.168.1.2
ip dhcp excluded-address 192.168.1.3
ip dhcp excluded-address 192.168.1.4
ip dhcp excluded-address 192.168.1.5
ip dhcp excluded-address 192.168.1.6
ip dhcp excluded-address 192.168.1.7
ip dhcp excluded-address 192.168.1.8
ip dhcp excluded-address 192.168.1.9
ip dhcp excluded-address 192.168.1.111
ip dhcp pool myDhcp
network 192.168.1.0 255.255.255.0
dns-server 139.130.4.4
default-router 192.168.1.1
ip cef
ip inspect name myfw http
ip inspect name myfw https
ip inspect name myfw pop3
ip inspect name myfw esmtp
ip inspect name myfw imap
ip inspect name myfw ssh
ip inspect name myfw dns
ip inspect name myfw ftp
ip inspect name myfw icmp
ip inspect name myfw h323
ip inspect name myfw udp
ip inspect name myfw realaudio
ip inspect name myfw tftp
ip inspect name myfw vdolive
ip inspect name myfw streamworks
ip inspect name myfw rcmd
ip inspect name myfw isakmp
ip inspect name myfw tcp
ip name-server 139.130.4.4
username ************************* privilege 15 password 0 *************************
crypto isakmp policy 1
encr 3des
authentication pre-share
group 2
crypto isakmp client configuration group HomeFull
key *************************
dns 8.8.8.8 8.8.8.4
pool SDM_POOL_1
include-local-lan
netmask 255.255.255.0
crypto isakmp profile ciscocp-ike-profile-1
match identity group HomeFull
client authentication list ciscocp_vpn_xauth_ml_1
isakmp authorization list ciscocp_vpn_group_ml_1
client configuration address respond
virtual-template 3
crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
crypto ipsec profile CiscoCP_Profile1
set security-association idle-time 1740
set transform-set ESP-3DES-SHA
set isakmp-profile ciscocp-ike-profile-1
crypto ctcp port 10000
archive
log config
hidekeys
interface Loopback10
ip address 10.0.0.1 255.255.255.0
ip nat inside
ip virtual-reassembly
interface ATM0
no ip address
no ip redirects
no ip unreachables
no ip proxy-arp
ip flow ingress
no atm ilmi-keepalive
interface ATM0.1 point-to-point
description TimsInternet
ip flow ingress
ip policy route-map VPN-Client
pvc 8/35
encapsulation aal5mux ppp dialer
dialer pool-member 3
interface FastEthernet0
interface FastEthernet1
interface FastEthernet2
interface FastEthernet3
interface Virtual-Template3 type tunnel
ip unnumbered Dialer3
tunnel mode ipsec ipv4
tunnel protection ipsec profile CiscoCP_Profile1
interface Vlan1
ip address 192.168.1.1 255.255.255.0
no ip redirects
no ip unreachables
no ip proxy-arp
ip inspect myfw in
ip nat inside
ip virtual-reassembly
no ip route-cache cef
no ip route-cache
ip tcp adjust-mss 1372
no ip mroute-cache
hold-queue 100 out
interface Dialer0
no ip address
interface Dialer3
ip address negotiated
ip access-group blockall in
no ip redirects
no ip unreachables
no ip proxy-arp
ip mtu 1492
ip flow ingress
ip nat outside
ip virtual-reassembly
encapsulation ppp
ip tcp header-compression
ip policy route-map VPN-Client
no ip mroute-cache
dialer pool 3
dialer-group 1
no cdp enable
ppp chap hostname *************************@direct.telstra.net
ppp chap password 0 *************************
ip local pool SDM_POOL_1 10.0.0.10 10.0.0.100
ip forward-protocol nd
ip route 0.0.0.0 0.0.0.0 Dialer3
ip http server
ip http authentication local
no ip http secure-server
ip nat inside source list 101 interface Dialer3 overload
ip access-list extended VPN-OUT
permit ip 10.0.0.0 0.0.0.255 any
ip access-list extended blockall
remark CCP_ACL Category=17
permit udp any any eq non500-isakmp
permit udp any any eq isakmp
permit esp any any
permit ahp any any
permit tcp any any eq 10000
deny ip any any
access-list 101 permit ip 192.168.1.0 0.0.0.255 any
access-list 101 permit ip 10.0.0.0 0.0.0.255 any
dialer-list 1 protocol ip permit
route-map VPN-Client permit 10
match ip address VPN-OUT
set ip next-hop 10.0.0.2
control-plane
line con 0
no modem enable
line aux 0
line vty 0 4
password cisco
scheduler max-task-time 5000
end
Router#exit
Connection closed by foreign host.

Thanks for the response.
Not sure how that would help, as I can connect to the internal network just fine; I want to hairpin back out the interface and surf the internet from the VPN client. The policy route-map makes Loopback10 the next hop, and it has NAT. -
VPN and Split-DNS problem connecting 851 to 3030 Concentrator
I have configured a Cisco 851 (IOS 12.4(11)T) to connect to the Cisco 3000 Concentrator (v4.72G). I am having multiple problems:
1. On the concentrator I have specified multiple domain names for split DNS: "hq.portablesunlimited.com,hq.cellfonestore.com". However, I see only the first name created in the DNS views.
2. We have a static WAN IP address with a fixed DNS server given by our ISP. I use the same DNS server on the client PCs connected to the 851, and I can resolve any external name, e.g. "www.google.com". When I try to resolve a split-DNS address, e.g. server.hq.portablesunlimited.com, it fails. I tried specifying the address of the 851 (10.0.0.1) as the DNS server for the clients, but then the clients do not resolve any address at all. However, if I go to the 851 console and ping, say, "www.yahoo.com", it works, and afterwards I can resolve "www.yahoo.com" from the client PCs as well.
I don't have any firewall or NAT enabled on the 851.
Here is the 851 config file:
version 12.4
no service pad
service timestamps debug datetime msec
service timestamps log datetime msec
service password-encryption
hostname firewall
boot-start-marker
boot-end-marker
logging buffered 51200 warnings
enable secret 5 xxxxxxxxxxxx
no aaa new-model
clock timezone PCTime -5
clock summer-time PCTime date Apr 6 2003 2:00 Oct 26 2003 2:00
no ip dhcp use vrf connected
ip dhcp excluded-address 10.220.1.1 10.220.1.99
ip dhcp excluded-address 10.220.1.201 10.220.1.254
ip dhcp pool sdm-pool1
import all
network 10.220.1.0 255.255.255.0
dns-server 129.x.x.80
default-router 10.220.1.1
ip cef
ip domain name mydomain.com
ip name-server 129.x.x.80
crypto pki trustpoint TP-self-signed-3072999871
enrollment selfsigned
subject-name cn=IOS-Self-Signed-Certificate-3072999871
revocation-check none
rsakeypair TP-self-signed-3072999871
crypto ipsec client ezvpn VPN1
connect auto
group xyz key xyz
mode network-extension
peer x.x.x.x
username xyz password xyz
xauth userid mode local
interface FastEthernet0
interface FastEthernet1
interface FastEthernet2
interface FastEthernet3
interface FastEthernet4
description $FW_OUTSIDE$$ES_WAN$
ip address 129.34.x.x.255.255.240
duplex auto
speed auto
crypto ipsec client ezvpn VPN1
interface Vlan1
description $ETH-SW-LAUNCH$$INTF-INFO-HWIC 4ESW$$ES_LAN$$FW_INSIDE$
ip address 10.220.1.1 255.255.255.0
ip tcp adjust-mss 1452
crypto ipsec client ezvpn VPN1 inside
ip route 0.0.0.0 0.0.x.x.34.7.82
ip http server
ip http authentication local
ip http secure-server
ip http timeout-policy idle 60 life 86400 requests 10000
ip dns view ezvpn-internal-view
domain name-server 10.128.1.10
ip dns view-list ezvpn-internal-viewlist
view ezvpn-internal-view 10
restrict name-group 1
view default 20
ip dns name-list 1 permit HQ.PORTABLESUNLIMITED.COM
ip dns server view-group ezvpn-internal-viewlist
no cdp run
end

Someone please reply to this post, as this issue is critical for us to decide on purchasing the above equipment for our 40 remote locations.
Thanks
Srikant -
Invoice A/R Payment Split Transaction into Several GL Accounts
Hi,
I got the following message from SAP Support:
The SplitTransaction property is not included in the product development plan at this time.
By SAP Note 1028874, we would like to ask you to post your requirement in our SAP Business One Product Development Collaboration forum and not via message:
/community [original link is broken]
Please refer to Note 1028874 for more information.
ISSUE
Sample 2, Bank transaction:
I have built a payment routine in Invoice A/R.
I am doing a bank transfer in Payments, paying with Interac (direct payment from a bank account).
Here in Canada you can withdraw money at the same time: for a sample invoice of $100, you can pay $200 and receive $100 in cash.
An example of what I'd like to do:
GL Account A $200 (Account Number, Debit)
GL Account B $100 (Account Number, Credit)
GL Account C $100 (Business Partner, Credit)
Any suggestions?
In the payment description I can set vPay.SplitTransaction = 0; but this does not work, according to SAP Support.
SUMMARY
I'd like to make a payment and split the transaction into several accounts, like a journal entry with reconciliation of the journal entry and the invoice.
Thank you,
Rune

Hi Peter,
I do not want you to promote future code; I need the code as it is in the SDK today.
It would look something like the SDK help code below, but please change the code to work as in your sample.
Thank you,
Rune
vPay.Invoices.AppliedFC = 0;
vPay.Invoices.AppliedSys = 0;
vPay.Invoices.DocEntry = 8;
vPay.Invoices.DocLine = 0;
vPay.Invoices.DocRate = 0;
vPay.Invoices.InvoiceType = 13;
vPay.Invoices.LineNum = 0;
vPay.Invoices.SumApplied = 5031.2;
vPay.Invoices.Add();
vPay.CardCode = vmp_CardCode_string;
vPay.DocDate = DateTime.Now;
vPay.JournalRemarks = "Incoming - Payment Bank Transfer";
vPay.TaxDate = DateTime.Now;
vPay.TransferAccount = vmp_BankAccount_string;// "_SYS00000000343";
vPay.TransferDate = DateTime.Now;
vPay.TransferReference = vmp_CardCode_string;
vPay.TransferSum = vmp_Amount_double;
vc_Message_Result_Int32 = vPay.Add(); -
Use Message Mapping to repeat top node and split Message
I am currently using XSLT mapping to do the majority of the mapping. Two issues remain: I need to repeat the top node for each of its child elements, and I need to split the message.
<Sensor xmlns="namespace">
<Observation>
<XML>Some Data</XML>
</Observation>
</Sensor>
1. Can it be done in XSLT? I posted a thread asking for help on this, where I would need to repeat the Sensor tag for each Observation. If this is possible, I will be able to split the messages at the HTTP adapter used for the target system.
2. Can Message Mapping be used? Using the same XSD on both source and target, will Sensor repeat if set to 1:n and Observation set to 1:1? In the source it is Sensor 1:1, Observation 1:n. If this works, I can use a BPM to send out individual messages using a ForEach.
Any help is greatly appreciated. I am fairly new to XI as this is my first major project. Thanks

Source File (batch of all orders) ---> XI (XSLT) ---> Each observation as one file ---> Out via HTTP
Source File
<?xml version="1.0" encoding="ISO-8859-1"?>
<receipt>
<manufacturer>Manufacture Name</manufacturer>
<manufacturer_gln>999999</manufacturer_gln>
<transfer_recipt>0123456</transfer_recipt>
<prod>
<product_GTIN>99999999999999</product_GTIN>
<product_LOT>123456A</product_LOT>
<production_date>20090131</production_date>
<expire_date>20120131</expire_date>
<carrier>
<carrier_type>P</carrier_type>
<carrier_barcode>001</carrier_barcode>
<carrier_detail>
<carrier>
<carrier_type>C</carrier_type>
<carrier_barcode>01</carrier_barcode>
<carrier_detail>
<carrier><carrier_type>ITEM</carrier_type><carrier_barcode>0108699547010089211</carrier_barcode></carrier>
<carrier><carrier_type>ITEM</carrier_type><carrier_barcode>0108699547010089212</carrier_barcode></carrier>
<carrier><carrier_type>ITEM</carrier_type><carrier_barcode>0108699547010089213</carrier_barcode></carrier>
</carrier_detail>
</carrier>
<carrier>
<carrier_type>C</carrier_type>
<carrier_barcode>02</carrier_barcode>
<carrier_detail>
<carrier><carrier_type>ITEM</carrier_type><carrier_barcode>0108699547010089214</carrier_barcode></carrier>
<carrier><carrier_type>ITEM</carrier_type><carrier_barcode>0108699547010089215</carrier_barcode></carrier>
<carrier><carrier_type>ITEM</carrier_type><carrier_barcode>0108699547010089216</carrier_barcode></carrier>
</carrier_detail>
</carrier>
</carrier_detail>
</carrier>
<carrier>
<carrier_type>P</carrier_type>
<carrier_barcode>002</carrier_barcode>
<carrier_detail>
<carrier>
<carrier_type>C</carrier_type>
<carrier_barcode>03</carrier_barcode>
<carrier_detail>
<carrier><carrier_type>ITEM</carrier_type><carrier_barcode>0108699547010089217</carrier_barcode></carrier>
<carrier><carrier_type>ITEM</carrier_type><carrier_barcode>0108699547010089218</carrier_barcode></carrier>
<carrier><carrier_type>ITEM</carrier_type><carrier_barcode>0108699547010089219</carrier_barcode></carrier>
</carrier_detail>
</carrier>
<carrier>
<carrier_type>C</carrier_type>
<carrier_barcode>04</carrier_barcode>
<carrier_detail>
<carrier><carrier_type>ITEM</carrier_type><carrier_barcode>0108699547010089220</carrier_barcode></carrier>
<carrier><carrier_type>ITEM</carrier_type><carrier_barcode>0108699547010089221</carrier_barcode></carrier>
</carrier_detail>
</carrier>
</carrier_detail>
</carrier>
</prod>
</receipt>
Target File
<?xml version="1.0" encoding="UTF-8"?>
<pmlcore:Sensor xmlns:pmlcore="urn:autoid:specification:interchange:PMLCore:xml:schema:1" xmlns:pmluid="urn:autoid:specification:universal:Identifier:xml:schema:1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:autoid:specification:interchange:PMLCore:xml:schema:1 cases.xsd">
<pmluid:ID>GPO_AI_LU_DC</pmluid:ID>
<pmlcore:Observation>
<pmlcore:DateTime>2008-10-13T17:53:00.265+02:00</pmlcore:DateTime>
<pmlcore:Command>PACK</pmlcore:Command>
<pmlcore:Tag>
<pmluid:ID>01</pmluid:ID>
<pmlcore:Data>
<pmlcore:XML>
<Memory>
<DataField fieldName="EXPIRATION_DATE">20120131</DataField>
<DataField fieldName="BATCH_ID">123456A</DataField>
<DataField fieldName="ZMFG_DATE">20090131</DataField>
<DataField fieldName="ZMFG_GLN">999999</DataField>
<DataField fieldName="ZMANUFACTURER">Manufacture Name</DataField>
<DataField fieldName="ZITEM_COUNT">16</DataField>
<DataField fieldName="ZWORK_ORDER_NUMBER">0123456</DataField>
</Memory>
</pmlcore:XML>
</pmlcore:Data>
</pmlcore:Tag>
<pmlcore:Tag>
<pmluid:ID>0108699547010089211</pmluid:ID>
<pmlcore:Data>
<pmlcore:XML>
<Memory>
<DataField fieldName="EXPIRATION_DATE">20120131</DataField>
<DataField fieldName="BATCH_ID">123456A</DataField>
<DataField fieldName="ZMFG_DATE">20090131</DataField>
<DataField fieldName="ZMFG_GLN">999999</DataField>
<DataField fieldName="ZMANUFACTURER">Manufacture Name</DataField>
<DataField fieldName="ZWORK_ORDER_NUMBER">0123456</DataField>
</Memory>
</pmlcore:XML>
</pmlcore:Data>
</pmlcore:Tag>
</pmlcore:Observation>
</pmlcore:Sensor>
I have the XSLT mapping working to transform data from the source file to the target file, but the target file has multiple Observations that need to be split into separate Sensor/Observation structures.
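Outside XI, the split itself (repeat the Sensor wrapper once per Observation) can be sketched in Python; the namespace and element content below are simplified placeholders for the pmlcore structures above:

```python
import xml.etree.ElementTree as ET

NS = "urn:example:sensor"  # placeholder for the real pmlcore namespace

SRC = f"""<Sensor xmlns="{NS}">
  <ID>GPO_AI_LU_DC</ID>
  <Observation><XML>obs 1</XML></Observation>
  <Observation><XML>obs 2</XML></Observation>
</Sensor>"""

root = ET.fromstring(SRC)
observations = root.findall(f"{{{NS}}}Observation")

# Build one Sensor document per Observation: copy the header elements
# and attach exactly one Observation child each.
docs = []
for obs in observations:
    sensor = ET.Element(f"{{{NS}}}Sensor")
    for child in root:
        if child.tag != f"{{{NS}}}Observation":
            sensor.append(child)
    sensor.append(obs)
    docs.append(ET.tostring(sensor, encoding="unicode"))

print(len(docs))  # one output message per Observation
```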
C# Split xml file into multiple files
Below I have an XML file. I need to split this XML file into multiple XML files based on the date column value.
Suppose I have 10 records with 3 different dates; then all records sharing a date should go into one file. For example, here I have a file with three dates, so the output should be 3 files, each containing all records for one date. I have no idea how to proceed, which is why I'm not posting any code. Needed urgently, please.
<XML>
<rootNode>
<childnode>
<date>2012-12-01</date>
<name>SSS</name>
</childnode>
<childnode>
<date>2012-12-01</date>
<name>SSS</name>
</childnode>
<childnode>
<date>2012-12-02</date>
<name>SSS</name>
</childnode>
<childnode>
<date>2012-12-03</date>
<name>SSS</name>
</childnode>
</rootNode>
</XML>

Here is the full code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

class curEntity
{
    public DateTime Date;
    public string Name;

    public curEntity(DateTime _Date, string _Name)
    {
        Date = _Date;
        Name = _Name;
    }
}

class Program
{
    static void Main(string[] args)
    {
        // Sample of the expected input shape (not otherwise used below)
        XElement xmlTree = new XElement("XML",
            new XElement("rootNode",
                new XElement("childnode",
                    new XElement("date"),
                    new XElement("name"))));

        string InfilePath = @"C:\temp\1.xml";
        string OutFilePath = @"C:\temp\1_";
        XDocument xmlDoc = XDocument.Load(InfilePath);
        List<curEntity> lst = xmlDoc.Element("XML").Element("rootNode").Elements("childnode")
            .Select(element => new curEntity(Convert.ToDateTime(element.Element("date").Value), element.Element("name").Value))
            .ToList();
        // One output file per distinct date
        var unique = lst.GroupBy(i => i.Date).Select(i => i.Key);
        foreach (DateTime dt in unique)
        {
            List<curEntity> CurEntities = lst.FindAll(x => x.Date == dt);
            XElement outXML = new XElement("XML",
                new XElement("rootNode"));
            foreach (curEntity ce in CurEntities)
            {
                outXML.Element("rootNode").Add(new XElement("childnode",
                    new XElement("date", ce.Date.ToString("yyyy-MM-dd")),
                    new XElement("name", ce.Name)));
            }
            outXML.Save(OutFilePath + dt.ToString("yyyy-MM-dd") + ".xml");
        }
        Console.WriteLine("Done");
        Console.ReadKey();
    }
} -
There is misleading information in two system views (sys.data_spaces & sys.destination_data_spaces) about the physical location of data after a partitioning MERGE and before an INDEX REBUILD operation on a partitioned table. In SQL Server 2012 SP1 CU6, the script below (SQLCMD mode; set the DataDrive & LogDrive variables for your environment) creates a test database with filegroups and files to support a partitioned table. The partition function and scheme spread the test data across 4 filegroups; an empty partition, filegroup and file are maintained at the start and end of the range. A problem occurs after the SWITCH and MERGE RANGE operations: the views sys.data_spaces & sys.destination_data_spaces show the logical, not the physical, location of data.
--=================================================================================
-- PartitionLabSetup_RangeRight.sql
-- 001. Create test database
-- 002. Add file groups and files
-- 003. Create partition function and schema
-- 004. Create and populate a test table
--=================================================================================
USE [master]
GO
-- 001 - Create Test Database
:SETVAR DataDrive "D:\SQL\Data\"
:SETVAR LogDrive "D:\SQL\Logs\"
:SETVAR DatabaseName "workspace"
:SETVAR TableName "TestTable"
-- Drop if exists and create Database
IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
BEGIN
ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DROP DATABASE $(DatabaseName)
END
CREATE DATABASE $(DatabaseName)
ON
( NAME = $(DatabaseName)_data,
FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
SIZE = 10,
MAXSIZE = 500,
FILEGROWTH = 5 )
LOG ON
( NAME = $(DatabaseName)_log,
FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
SIZE = 5MB,
MAXSIZE = 5000MB,
FILEGROWTH = 5MB ) ;
GO
-- 002. Add file groups and files
--:SETVAR DatabaseName "workspace"
--:SETVAR TableName "TestTable"
--:SETVAR DataDrive "D:\SQL\Data\"
--:SETVAR LogDrive "D:\SQL\Logs\"
DECLARE @nSQL NVARCHAR(2000) ;
DECLARE @x INT = 1;
WHILE @x <= 6
BEGIN
SELECT @nSQL =
'ALTER DATABASE $(DatabaseName)
ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
ALTER DATABASE $(DatabaseName)
ADD FILE
( NAME = ''$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + ''',
FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf'' )
TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
EXEC sp_executeSQL @nSQL;
SET @x = @x + 1;
END
-- 003. Create partition function and schema
--:SETVAR TableName "TestTable"
--:SETVAR DatabaseName "workspace"
USE $(DatabaseName);
CREATE PARTITION FUNCTION $(TableName)_func (int)
AS RANGE RIGHT FOR VALUES
(
0,
15,
30,
45,
60
);
CREATE PARTITION SCHEME $(TableName)_scheme
AS
PARTITION $(TableName)_func
TO
(
$(TableName)_fg1,
$(TableName)_fg2,
$(TableName)_fg3,
$(TableName)_fg4,
$(TableName)_fg5,
$(TableName)_fg6
);
-- Create TestTable
--:SETVAR TableName "TestTable"
--:SETVAR BackupDrive "D:\SQL\Backups\"
--:SETVAR DatabaseName "workspace"
CREATE TABLE [dbo].$(TableName)(
[Partition_PK] [int] NOT NULL,
[GUID_PK] [uniqueidentifier] NOT NULL,
[CreateDate] [datetime] NULL,
[CreateServer] [nvarchar](50) NULL,
[RandomNbr] [int] NULL,
CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
(
[Partition_PK] ASC,
[GUID_PK] ASC
)
) ON $(TableName)_scheme(Partition_PK)
ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
-- 004. Create and populate a test table
-- Load TestTable Data - Seconds 0-59 are used as the Partitoning Key
--:SETVAR TableName "TestTable"
SET NOCOUNT ON;
DECLARE @Now DATETIME = GETDATE()
WHILE @Now > DATEADD(minute,-1,GETDATE())
BEGIN
INSERT INTO [dbo].$(TableName)
([Partition_PK]
,[RandomNbr])
VALUES
(
DATEPART(second,GETDATE())
,ROUND((RAND() * 100),0)
)
END
-- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
SELECT
N'DatabaseName' = DB_NAME()
, N'SchemaName' = s.name
, N'TableName' = o.name
, N'IndexName' = i.name
, N'IndexType' = i.type_desc
, N'PartitionScheme' = ps.name
, N'DataSpaceName' = ds.name
, N'DataSpaceType' = ds.type_desc
, N'PartitionFunction' = pf.name
, N'PartitionNumber' = dds.destination_id
, N'BoundaryValue' = prv.value
, N'RightBoundary' = pf.boundary_value_on_right
, N'PartitionFileGroup' = ds2.name
, N'RowsOfData' = p.[rows]
FROM
sys.objects AS o
INNER JOIN sys.schemas AS s
ON o.[schema_id] = s.[schema_id]
INNER JOIN sys.partitions AS p
ON o.[object_id] = p.[object_id]
INNER JOIN sys.indexes AS i
ON p.[object_id] = i.[object_id]
AND p.index_id = i.index_id
INNER JOIN sys.data_spaces AS ds
ON i.data_space_id = ds.data_space_id
INNER JOIN sys.partition_schemes AS ps
ON ds.data_space_id = ps.data_space_id
INNER JOIN sys.partition_functions AS pf
ON ps.function_id = pf.function_id
LEFT OUTER JOIN sys.partition_range_values AS prv
ON pf.function_id = prv.function_id
AND p.partition_number = prv.boundary_id
LEFT OUTER JOIN sys.destination_data_spaces AS dds
ON ps.data_space_id = dds.partition_scheme_id
AND p.partition_number = dds.destination_id
LEFT OUTER JOIN sys.data_spaces AS ds2
ON dds.data_space_id = ds2.data_space_id
ORDER BY
DatabaseName
,SchemaName
,TableName
,IndexName
,PartitionNumber
--=================================================================================
-- SECTION 2 - SWITCH OUT
-- 001 - Create TestTableOut
-- 002 - Switch out partition in range 0-14
-- 003 - Merge range 0 -29
-- 001. TestTableOut
:SETVAR TableName "TestTable"
IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
DROP TABLE [dbo].[$(TableName)Out]
CREATE TABLE [dbo].[$(TableName)Out](
[Partition_PK] [int] NOT NULL,
[GUID_PK] [uniqueidentifier] NOT NULL,
[CreateDate] [datetime] NULL,
[CreateServer] [nvarchar](50) NULL,
[RandomNbr] [int] NULL,
CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
(
[Partition_PK] ASC,
[GUID_PK] ASC
)
) ON $(TableName)_fg2;
GO
-- 002 - Switch out partition in range 0-14
--:SETVAR TableName "TestTable"
ALTER TABLE dbo.$(TableName)
SWITCH PARTITION 2 TO dbo.$(TableName)Out;
-- 003 - Merge range 0 - 29
--:SETVAR TableName "TestTable"
ALTER PARTITION FUNCTION $(TableName)_func()
MERGE RANGE (15);
-- Confirm table partitioning
-- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
SELECT
N'DatabaseName' = DB_NAME()
, N'SchemaName' = s.name
, N'TableName' = o.name
, N'IndexName' = i.name
, N'IndexType' = i.type_desc
, N'PartitionScheme' = ps.name
, N'DataSpaceName' = ds.name
, N'DataSpaceType' = ds.type_desc
, N'PartitionFunction' = pf.name
, N'PartitionNumber' = dds.destination_id
, N'BoundaryValue' = prv.value
, N'RightBoundary' = pf.boundary_value_on_right
, N'PartitionFileGroup' = ds2.name
, N'RowsOfData' = p.[rows]
FROM
sys.objects AS o
INNER JOIN sys.schemas AS s
ON o.[schema_id] = s.[schema_id]
INNER JOIN sys.partitions AS p
ON o.[object_id] = p.[object_id]
INNER JOIN sys.indexes AS i
ON p.[object_id] = i.[object_id]
AND p.index_id = i.index_id
INNER JOIN sys.data_spaces AS ds
ON i.data_space_id = ds.data_space_id
INNER JOIN sys.partition_schemes AS ps
ON ds.data_space_id = ps.data_space_id
INNER JOIN sys.partition_functions AS pf
ON ps.function_id = pf.function_id
LEFT OUTER JOIN sys.partition_range_values AS prv
ON pf.function_id = prv.function_id
AND p.partition_number = prv.boundary_id
LEFT OUTER JOIN sys.destination_data_spaces AS dds
ON ps.data_space_id = dds.partition_scheme_id
AND p.partition_number = dds.destination_id
LEFT OUTER JOIN sys.data_spaces AS ds2
ON dds.data_space_id = ds2.data_space_id
ORDER BY
DatabaseName
,SchemaName
,TableName
,IndexName
,PartitionNumber
The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
The T-SQL code below illustrates the problem.
-- PartitionLab_RangeRight
USE workspace;
DROP TABLE dbo.TestTableOut;
USE master;
ALTER DATABASE workspace
REMOVE FILE TestTable_f3 ;
-- ERROR
--Msg 5042, Level 16, State 1, Line 1
--The file 'TestTable_f3 ' cannot be removed because it is not empty.
ALTER DATABASE workspace
REMOVE FILE TestTable_f2 ;
-- Works surprisingly!!
use workspace;
ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
--Msg 622, Level 16, State 3, Line 2
--The filegroup "TestTable_fg2" has no files assigned to it. Tables, indexes, text columns, ntext columns, and image columns cannot be populated on this filegroup until a file is added.
--The statement has been terminated.
If you run ALTER INDEX REBUILD before trying to remove files from File Group 3, it works. Rerun the database setup script then the code below.
-- RANGE RIGHT
-- Rerun PartitionLabSetup_RangeRight.sql before the code below
USE workspace;
DROP TABLE dbo.TestTableOut;
ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
USE master;
ALTER DATABASE workspace
REMOVE FILE TestTable_f3;
-- Works as expected!!
The file in File Group 2 appears to contain data, but it can be dropped. Although the system views report the data in File Group 2, it still physically resides in File Group 3 and isn’t moved until the index is rebuilt. The RANGE RIGHT function means the left file group (File Group 2) is retained when merging ranges.
RANGE LEFT would have retained the data in File Group 3, where it already resided; no INDEX REBUILD would be necessary to effectively complete the MERGE operation. The script below implements the same partitioning strategy (data distribution between partitions) on the test table but uses different boundary definitions and RANGE LEFT.
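Both boundary definitions put the same rows in the same partitions; they differ only in which partition owns a boundary value itself. A small sketch of the two semantics (plain Python, not T-SQL; the RANGE LEFT boundaries are taken from the script below, while the RANGE RIGHT boundaries are an assumption inferred from the MERGE RANGE (15) and MERGE RANGE (0) statements):

```python
import bisect

def partition_number(value, boundaries, range_right):
    """Return the 1-based partition a value lands in.

    RANGE RIGHT: each boundary value belongs to the partition on its right,
    so the partition is 1 + (count of boundaries <= value).
    RANGE LEFT: each boundary value belongs to the partition on its left,
    so the partition is 1 + (count of boundaries strictly < value).
    """
    if range_right:
        return bisect.bisect_right(boundaries, value) + 1
    return bisect.bisect_left(boundaries, value) + 1

# RANGE LEFT boundaries from the script below; RANGE RIGHT boundaries
# assumed from the earlier MERGE statements.
left_bounds = [-1, 14, 29, 44, 59]
right_bounds = [0, 15, 30, 45, 60]

for v in (-1, 0, 14, 15):
    print(v, partition_number(v, right_bounds, True),
          partition_number(v, left_bounds, False))
```

For every sampled value the two schemes agree on the partition number; the ownership of the boundary itself (15 under RANGE RIGHT, 14 under RANGE LEFT) is what determines which file group keeps data during a MERGE.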
--=================================================================================
-- PartitionLabSetup_RangeLeft.sql
-- 001. Create test database
-- 002. Add file groups and files
-- 003. Create partition function and schema
-- 004. Create and populate a test table
--=================================================================================
USE [master]
GO
-- 001 - Create Test Database
:SETVAR DataDrive "D:\SQL\Data\"
:SETVAR LogDrive "D:\SQL\Logs\"
:SETVAR DatabaseName "workspace"
:SETVAR TableName "TestTable"
-- Drop if exists and create Database
IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
BEGIN
ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DROP DATABASE $(DatabaseName)
END
CREATE DATABASE $(DatabaseName)
ON
( NAME = $(DatabaseName)_data,
FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
SIZE = 10,
MAXSIZE = 500,
FILEGROWTH = 5 )
LOG ON
( NAME = $(DatabaseName)_log,
FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
SIZE = 5MB,
MAXSIZE = 5000MB,
FILEGROWTH = 5MB ) ;
GO
-- 002. Add file groups and files
--:SETVAR DatabaseName "workspace"
--:SETVAR TableName "TestTable"
--:SETVAR DataDrive "D:\SQL\Data\"
--:SETVAR LogDrive "D:\SQL\Logs\"
DECLARE @nSQL NVARCHAR(2000) ;
DECLARE @x INT = 1;
WHILE @x <= 6
BEGIN
SELECT @nSQL =
'ALTER DATABASE $(DatabaseName)
ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
ALTER DATABASE $(DatabaseName)
ADD FILE
( NAME = ''$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + ''',
FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
) TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
EXEC sp_executeSQL @nSQL;
SET @x = @x + 1;
END
-- 003. Create partition function and schema
--:SETVAR TableName "TestTable"
--:SETVAR DatabaseName "workspace"
USE $(DatabaseName);
CREATE PARTITION FUNCTION $(TableName)_func (int)
AS RANGE LEFT FOR VALUES
(
-1,
14,
29,
44,
59
);
CREATE PARTITION SCHEME $(TableName)_scheme
AS
PARTITION $(TableName)_func
TO
(
$(TableName)_fg1,
$(TableName)_fg2,
$(TableName)_fg3,
$(TableName)_fg4,
$(TableName)_fg5,
$(TableName)_fg6
);
-- Create TestTable
--:SETVAR TableName "TestTable"
--:SETVAR BackupDrive "D:\SQL\Backups\"
--:SETVAR DatabaseName "workspace"
CREATE TABLE [dbo].$(TableName)(
[Partition_PK] [int] NOT NULL,
[GUID_PK] [uniqueidentifier] NOT NULL,
[CreateDate] [datetime] NULL,
[CreateServer] [nvarchar](50) NULL,
[RandomNbr] [int] NULL,
CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
(
[Partition_PK] ASC,
[GUID_PK] ASC
)
) ON $(TableName)_scheme(Partition_PK)
ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
-- 004. Create and populate a test table
-- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
--:SETVAR TableName "TestTable"
SET NOCOUNT ON;
DECLARE @Now DATETIME = GETDATE()
WHILE @Now > DATEADD(minute,-1,GETDATE())
BEGIN
INSERT INTO [dbo].$(TableName)
([Partition_PK]
,[RandomNbr])
VALUES
(
DATEPART(second,GETDATE())
,ROUND((RAND() * 100),0)
)
END
-- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
SELECT
N'DatabaseName' = DB_NAME()
, N'SchemaName' = s.name
, N'TableName' = o.name
, N'IndexName' = i.name
, N'IndexType' = i.type_desc
, N'PartitionScheme' = ps.name
, N'DataSpaceName' = ds.name
, N'DataSpaceType' = ds.type_desc
, N'PartitionFunction' = pf.name
, N'PartitionNumber' = dds.destination_id
, N'BoundaryValue' = prv.value
, N'RightBoundary' = pf.boundary_value_on_right
, N'PartitionFileGroup' = ds2.name
, N'RowsOfData' = p.[rows]
FROM
sys.objects AS o
INNER JOIN sys.schemas AS s
ON o.[schema_id] = s.[schema_id]
INNER JOIN sys.partitions AS p
ON o.[object_id] = p.[object_id]
INNER JOIN sys.indexes AS i
ON p.[object_id] = i.[object_id]
AND p.index_id = i.index_id
INNER JOIN sys.data_spaces AS ds
ON i.data_space_id = ds.data_space_id
INNER JOIN sys.partition_schemes AS ps
ON ds.data_space_id = ps.data_space_id
INNER JOIN sys.partition_functions AS pf
ON ps.function_id = pf.function_id
LEFT OUTER JOIN sys.partition_range_values AS prv
ON pf.function_id = prv.function_id
AND p.partition_number = prv.boundary_id
LEFT OUTER JOIN sys.destination_data_spaces AS dds
ON ps.data_space_id = dds.partition_scheme_id
AND p.partition_number = dds.destination_id
LEFT OUTER JOIN sys.data_spaces AS ds2
ON dds.data_space_id = ds2.data_space_id
ORDER BY
DatabaseName
,SchemaName
,TableName
,IndexName
,PartitionNumber
--=================================================================================
-- SECTION 2 - SWITCH OUT
-- 001 - Create TestTableOut
-- 002 - Switch out partition in range 0-14
-- 003 - Merge range 0 -29
-- 001. TestTableOut
:SETVAR TableName "TestTable"
IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
DROP TABLE [dbo].[$(TableName)Out]
CREATE TABLE [dbo].[$(TableName)Out](
[Partition_PK] [int] NOT NULL,
[GUID_PK] [uniqueidentifier] NOT NULL,
[CreateDate] [datetime] NULL,
[CreateServer] [nvarchar](50) NULL,
[RandomNbr] [int] NULL,
CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
(
[Partition_PK] ASC,
[GUID_PK] ASC
)
) ON $(TableName)_fg2;
GO
-- 002 - Switch out partition in range 0-14
--:SETVAR TableName "TestTable"
ALTER TABLE dbo.$(TableName)
SWITCH PARTITION 2 TO dbo.$(TableName)Out;
-- 003 - Merge range 0 - 29
--:SETVAR TableName "TestTable"
ALTER PARTITION FUNCTION $(TableName)_func()
MERGE RANGE (14);
-- Confirm table partitioning
-- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
SELECT
N'DatabaseName' = DB_NAME()
, N'SchemaName' = s.name
, N'TableName' = o.name
, N'IndexName' = i.name
, N'IndexType' = i.type_desc
, N'PartitionScheme' = ps.name
, N'DataSpaceName' = ds.name
, N'DataSpaceType' = ds.type_desc
, N'PartitionFunction' = pf.name
, N'PartitionNumber' = dds.destination_id
, N'BoundaryValue' = prv.value
, N'RightBoundary' = pf.boundary_value_on_right
, N'PartitionFileGroup' = ds2.name
, N'RowsOfData' = p.[rows]
FROM
sys.objects AS o
INNER JOIN sys.schemas AS s
ON o.[schema_id] = s.[schema_id]
INNER JOIN sys.partitions AS p
ON o.[object_id] = p.[object_id]
INNER JOIN sys.indexes AS i
ON p.[object_id] = i.[object_id]
AND p.index_id = i.index_id
INNER JOIN sys.data_spaces AS ds
ON i.data_space_id = ds.data_space_id
INNER JOIN sys.partition_schemes AS ps
ON ds.data_space_id = ps.data_space_id
INNER JOIN sys.partition_functions AS pf
ON ps.function_id = pf.function_id
LEFT OUTER JOIN sys.partition_range_values AS prv
ON pf.function_id = prv.function_id
AND p.partition_number = prv.boundary_id
LEFT OUTER JOIN sys.destination_data_spaces AS dds
ON ps.data_space_id = dds.partition_scheme_id
AND p.partition_number = dds.destination_id
LEFT OUTER JOIN sys.data_spaces AS ds2
ON dds.data_space_id = ds2.data_space_id
ORDER BY
DatabaseName
,SchemaName
,TableName
,IndexName
,PartitionNumber
The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
The data in the File and File Group to be dropped (File Group 2) has already been switched out; File Group 3 contains the data so no index rebuild is needed to move data and complete the MERGE.
RANGE RIGHT would not be a problem in a ‘Sliding Window’ if the same file group were used for all partitions; when partitions are created and dropped across multiple file groups, it introduces a dependency on full index rebuilds. Larger tables are typically the ones partitioned, and a full index rebuild can be an expensive operation. I’m not sure how a RANGE RIGHT partitioning strategy could be implemented, with an ascending partitioning key, using multiple file groups without having to move data. Using a single file group (with multiple files) for all partitions within a table would avoid physically moving data between file groups; no index rebuild would be necessary to complete a MERGE, and the system views would accurately reflect the physical location of data.
If a RANGE RIGHT partition function is used, the data is physically in the wrong file group after the MERGE assuming a typical ascending partitioning key, and the 'Data Spaces' system views might be misleading. Thanks to Manuj and Chris for a lot of help
investigating this.
NOTE 10/03/2014 - The solution
The solution is so easy it's embarrassing: I was using the wrong boundary points for the MERGE (both RANGE LEFT and RANGE RIGHT) when getting rid of historic data.
-- Wrong Boundary Point Range Right
--ALTER PARTITION FUNCTION $(TableName)_func()
--MERGE RANGE (15);
-- Wrong Boundary Point Range Left
--ALTER PARTITION FUNCTION $(TableName)_func()
--MERGE RANGE (14);
-- Correct Boundary Points for MERGE
ALTER PARTITION FUNCTION $(TableName)_func()
MERGE RANGE (0); -- or -1 for RANGE LEFT
The empty, switched-out partition (on File Group 2) is then MERGED with the empty partition maintained at the start of the range, and no data movement is necessary. I retract the suggestion that a problem exists with RANGE RIGHT Sliding Windows using multiple file groups and apologize :-)
Hi Paul Brewer,
Thanks for your post and glad to hear that the issue is resolved. It is kind of you to post a reply sharing your solution; that way, other community members can benefit from it.
Regards.
Sofiya Li
TechNet Community Support -
Issue with using year-from-dateTime function
Hi
I am a novice at XML and related technologies, and as luck would have it I need to produce a value in YYYY-MM-DD format irrespective of the input format.
I am using the XPath functions year-from-dateTime, month-from-dateTime, etc. to split up the input value and put the result in the required format.
When I am trying to use fn:year-from-dateTime to get year from a variable that consists of "29/07/09" as below
<xsl:value-of select="fn:year-from-dateTime(xs:date('29/07/09'))">
where 'fn' prefix refers to http://www.w3.org/2005/02/xpath-functions
and 'xs' prefix refers to http://www.w3.org/2001/XMLSchema
and I am getting below error while running transformation from java...
'The first argument to the non-static Java function 'date' is not a valid object reference.'
FATAL ERROR: 'Could not compile stylesheet'
I don't know of any method "date" in Java.
Can anybody help me on this? I have been breaking my head on this since yesterday.
"BJWILD" <[email protected]> wrote in
message
news:g7dqkq$bqn$[email protected]..
> I'm trying to create a function that takes an array as a parameter and
> give it a default value of an empty array, however I'm getting "1047:
> Parameter initializer is unknown" with a number of ways I have tried.
>
> eg:
>
> public function foo( myArray:Array = [] ):void{
> // do stuff with the array
> }
>
> generated this error:
> 1047: Parameter initializer unknown or is not a compile-time constant.
>
> using myArray:Array = new Array() gives the same error...
>
> Anyone else come across this?
try
public function foo(myArray:Array = null):void{
    if (myArray == null){
        myArray = new Array();
    }
    // do stuff with the array
}
not sure why what you tried isn't working, but this is one way around it... -
Is it possible to easily perform a comparison between dateTime objects for the condition of a while expression?
I've tried various things, such as comparing two dateTime variables with <=, but the only thing I can get to work is to extract and compare the date/time/second elements individually.
Regards,
Toby
Ok, so the best I've come up with so far is:
1. to split the two dates that need to be compared into their hour/minute/second components, and compare these as numbers.
2. create a custom XPath function in Java to perform the comparison.
Any other ideas?
Toby -
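For what it's worth, most general-purpose languages allow dateTime values to be compared directly in a while condition, with no per-component extraction; a minimal Python sketch:

```python
from datetime import datetime, timedelta

t1 = datetime(2013, 12, 1, 8, 30, 0)
t2 = t1 + timedelta(hours=1)

# datetime objects support <, <=, ==, etc. directly, so a while
# condition can compare them without splitting out hour/minute/second
while t1 < t2:
    t1 += timedelta(minutes=30)
```

Whether the XPath engine in use supports direct `<=` on xs:dateTime depends on its XPath 2.0 coverage, which is likely the root of the original problem.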
Query to split one row to multiple based on date range
Hi,
I need to split single row into multple based on date range defined in a column, start_dt and end_dt
I have a data
ID From date End_dt measure
1 2013-12-01 2013-12-03 1
1 2013-12-04 2013-12-06 2
2 2013-12-01 2013-12-02 11
3 2013-12-03 2013-12-04 22
I required output as
ID Date measure
1 2013-12-01 1
1 2013-12-02 1
1 2013-12-03 1
1 2013-12-04 2
1 2013-12-05 2
1 2013-12-06 2
2 2013-12-01 11
2 2013-12-02 11
3 2013-12-03 22
3 2013-12-04 22
Please provide me a SQL query for the same.
Amit
Please mark as answer if helpful
http://fascinatingsql.wordpress.com/
Have a calendar table, for example, and then UNION ALL the from date and end date and JOIN to the calendar table
SELECT ID, [From date] FROM tbl
union all
SELECT ID, End_dt FROM tbl
with tmp(plant_date) as
(
select cast('20130101' as datetime)
union all
select plant_date + 1
from tmp
where plant_date < '20131231'
)
select *
from tmp
option (maxrecursion 0)
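The recursive-CTE approach can be cross-checked outside SQL; a small Python sketch performing the same per-day expansion on the question's sample rows:

```python
from datetime import date, timedelta

rows = [  # (ID, from_date, end_dt, measure) as in the question
    (1, date(2013, 12, 1), date(2013, 12, 3), 1),
    (1, date(2013, 12, 4), date(2013, 12, 6), 2),
    (2, date(2013, 12, 1), date(2013, 12, 2), 11),
    (3, date(2013, 12, 3), date(2013, 12, 4), 22),
]

def explode(rows):
    # emit one output row per day in each inclusive [from_date, end_dt] range
    for rid, start, end, measure in rows:
        d = start
        while d <= end:
            yield (rid, d, measure)
            d += timedelta(days=1)

result = list(explode(rows))
print(len(result))  # 10 rows, matching the requested output
```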
Best Regards,
Uri Dimant SQL Server MVP,
http://sqlblog.com/blogs/uri_dimant/
MS SQL optimization: MS SQL Development and Optimization
MS SQL Consulting:
Large scale of database and data cleansing
Remote DBA Services:
Improves MS SQL Database Performance
SQL Server Integration Services:
Business Intelligence -
Splitting one column into different columns.
Hello Experts,
How do I split a datetime column into different columns while doing a SELECT statement?
Ex:
The column "REC_CRT_TS" has data like "2014-05-08 08:23:09.0000000". The datatype of this column is "DateTime", and I want it in a SELECT statement like:
SELECT
YEAR(REC_CRT_TS) AS [YEAR],
MONTH(REC_CRT_TS) AS [MONTH],
DATENAME(month, REC_CRT_TS) AS MONTHNAME,
DATEPART(week, REC_CRT_TS) AS WEEKNUM,
DAY(REC_CRT_TS) AS [DATE],
DATEPART(hour, REC_CRT_TS) AS [HOUR]
FROM TABLE_NAME;
The output should look like this;
--YEAR| MONTH | MONTHNAME| WEEKNUM | DATE | HOUR
--2014| 5 | May | 25 | 08 |08
Any suggestions please.
Thanks!
Rahman
I did some very quick research and I see in this blog post
http://www.jamesserra.com/archive/2011/08/microsoft-sql-server-parallel-data-warehouse-pdw-explained/
that it also uses its own query engine and not all features of SQL Server are supported. So, you might not be able to use all your DBA tricks. And you wouldn’t want to build a solution against SQL Server and then just hope to upsize it to Parallel Data Warehouse Edition.
So, it is quite possible that this function doesn't exist in the PDW version of SQL Server. In this case you may want to implement a CASE-based month name or do it in the client application.
For every expert, there is an equal and opposite expert. - Becker's Law
My blog
My TechNet articles -
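For comparison, the same breakdown of the sample timestamp can be sketched in Python (note the week number here uses the ISO calendar, which may differ from SQL Server's DATEPART(week, ...) depending on DATEFIRST settings):

```python
from datetime import datetime

ts = datetime(2014, 5, 8, 8, 23, 9)  # sample value from the question

parts = {
    "YEAR": ts.year,
    "MONTH": ts.month,
    "MONTHNAME": ts.strftime("%B"),
    # ISO week number; SQL Server's week numbering can differ
    "WEEKNUM": ts.isocalendar()[1],
    "DATE": ts.day,
    "HOUR": ts.hour,
}
print(parts)
```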
Splitting a get-Item LastwriteTime
I am trying to pull a number of variables to be used in various places in a script, and I have had trouble with the formatting of the strings.
I organically came to the following conclusion, but I am sure there is a better way!
$Date=Get-Item "C:\BOSS"|Select LastWriteTime|Format-Wide
$Datesimple=Get-Date -format Mdy
$Dates= $date | out-string
$sDates= $dates -split ("/")
$year = $sdates[2] -split (" ")
$mm = $sdates[0] | out-string
$d = $sdates[1] | out-string
$yyyy = $year[0] | out-string
$folderd = $mm+$d+$yyyy
$folderd = $folderd -replace "[^\d]",""
This is probably solvable in a single line of code, I just couldn't figure out how!
[datetime]$Date = (Get-Item "C:\BOSS").LastWriteTime
Get the day:
$date.day
Try to typecast to a datetime object.
Hope this helps -
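The same idea in a short Python sketch: read the last-write time as a real date value once, then format it, instead of splitting strings (a temporary file stands in for the C:\BOSS folder from the question):

```python
import os
import tempfile
from datetime import datetime

# any file will do for the demo; the question used C:\BOSS
fd, path = tempfile.mkstemp()
os.close(fd)

# take the modification time as a real datetime, then format once --
# no splitting on "/" or regex cleanup required
mtime = datetime.fromtimestamp(os.path.getmtime(path))
folder_name = mtime.strftime("%m%d%Y")  # MMddyyyy, e.g. a zero-padded date
print(folder_name)

os.remove(path)
```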
Hi Experts
I am facing a problem with a Batch Split add-on. There is a parent batch with a quantity of 100. When I split this parent batch into a number of child batches, the batch splits successfully but stock increases by the split quantities (e.g. creating 2 child batches of 50 each makes the stock quantity 200). How can I manage the stock?
The code used is:
SAPbobsCOM.Recordset oRecSet = default(SAPbobsCOM.Recordset);
oRecSet = (SAPbobsCOM.Recordset)SBO_Company.GetBusinessObject(BoObjectTypes.BoRecordset);
GRPo = (SAPbobsCOM.Documents)oCompany.GetBusinessObject(SAPbobsCOM.BoObjectTypes.oInventoryGenEntry);
string sql = string.Empty;
GRPo.Lines.BatchNumbers.Quantity = Convert.ToDouble(oEditQty.Value.Trim());
GRPo.Lines.BatchNumbers.InternalSerialNumber = Convert.ToString(p);
GRPo.Lines.BatchNumbers.BatchNumber = oEditNBID.Value.Trim();
GRPo.Lines.BatchNumbers.ManufacturingDate = DateTime.Now;
GRPo.Lines.BatchNumbers.Add();
GRPo.Lines.BatchNumbers.SetCurrentLine(0);
GRPo.Lines.ItemCode = SplitBatch.ItmCode;
GRPo.Lines.ItemDescription = SplitBatch.ItmName;
GRPo.Lines.ShipDate = DateTime.Now;
GRPo.Lines.Quantity = Convert.ToDouble(oEditQty.Value.Trim());
GRPo.Lines.WarehouseCode = WhsCode.Selected.Value.ToString();
GRPo.Lines.Add();
GRPo.PaymentGroupCode = -1;
Regards
Pushpendra
Hi
The quantities come from the matrix rows; each row has a different quantity, so that is not the issue, and I applied a specific condition. The quantity depends on three types (Inspected qty, Non Inspected qty, Difference qty), all driven by a dropdown (ddl) in the matrix. When I split the parent batch into a child batch of 10 qty, the stock is increased by 10 qty.
How can I manage these quantities so that stock is not increased?
Regards
Pushpendra
Edited by: Pushpendra Yadav on Feb 14, 2012 10:06 AM
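Whatever the DI API specifics, the invariant to enforce is that the child batch quantities replace the parent quantity rather than add to stock; a hypothetical Python sketch of that bookkeeping (names are illustrative, not SAP API calls):

```python
def split_batch(stock, parent_batch, child_quantities):
    """Move a parent batch's quantity into child batches without
    inflating total stock. `stock` maps batch number -> quantity."""
    parent_qty = stock[parent_batch]
    if sum(child_quantities) != parent_qty:
        raise ValueError("child quantities must sum to the parent quantity")
    # the parent must be issued out as the children are received,
    # otherwise total stock doubles -- the symptom described above
    del stock[parent_batch]
    for i, qty in enumerate(child_quantities, start=1):
        stock[f"{parent_batch}-{i}"] = qty
    return stock

stock = {"BATCH100": 100}
split_batch(stock, "BATCH100", [50, 50])
print(sum(stock.values()))  # total stock is unchanged
```

In DI API terms this usually means pairing the goods receipt of the child batches with a matching goods issue of the parent batch, so the two postings net to zero.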