
Friday, March 23, 2012

Multiple initiators to common target

Information for configuring Service Broker when you have multiple initiators seems to be thin on the ground.
The examples are all point-to-point and seem to require separate logins for each initiator and corresponding exchange of certificates using a multi-pass installation.
E.g. Rushi's ServiceListing API.

We have an application with several hundred workstations which need to send transaction data to department servers and then on to corporate. There is also a need to replicate small amounts of reference data to all levels.

The examples seem to generate an installation and administrative nightmare.

I have a question - would the following scheme work?

1. All our machines are on the same domain so transport security can be handled by Windows Security (One less set of certificates to exchange).

2. Generate two certificates on the corporate database and export them to the department servers and workstations. Use these pre-installed certificates in each database to handle dialog security for all conversations, i.e. multiple workstations feed into one user id on the department server using a single certificate. Ditto for all the department servers feeding into corporate.

3. Use Service Broker IDs to determine where messages are sent (as the same services would be installed at multiple locations).

4. When a database is installed/restored at a workstation, does this invalidate any certificates that are shipped with it?

Carl Berger wrote:

Information for configuring Service Broker when you have multiple initiators seems to be thin on the ground.
The examples are all point-to-point and seem to require separate logins for each initiator and corresponding exchange of certificates using a multi-pass installation.
E.g. Rushi's ServiceListing API.

We are working on a tool based on the service listing concept that should make this much, much easier. My plan is to release it within days.

Carl Berger wrote:

We have an application with several hundred workstations which need to send transaction data to department servers and then on to corporate. There is also a need to replicate small amounts of reference data to all levels.
The examples seem to generate an installation and administrative nightmare.

You might want to consider a BCN (Broker Configuration Notice) Service for your environment: http://msdn2.microsoft.com/en-us/library/ms166054(SQL.90).aspx
This can take care of the service routes and service security settings, acting as a UDDI or Active Directory service (in fact it can be implemented as a wrapper around AD). The BCN service is interrogated whenever a route or a remote service binding is missing. The BCN service itself can be configured to accept anonymous dialogs and validates each request by out-of-band means (e.g. it validates that the requesting host is a member machine of your domain).
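As a rough illustration (the service name, broker GUID and address below are placeholders, not taken from this thread), the kind of route such a BCN service would create on demand looks like this:

-- Route created on demand when an initiator reports it as missing
CREATE ROUTE [RouteToCorporate]
WITH SERVICE_NAME = 'CorporateIntakeService',
     BROKER_INSTANCE = 'A1B2C3D4-0000-0000-0000-000000000001',
     ADDRESS = 'TCP://corpserver.example.com:4022';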

Carl Berger wrote:

1. All our machines are on the same domain so transport security can be handled by Windows Security (One less set of certificates to exchange).

Yes, this works. If you plan to use machine accounts instead of a domain user account for the SQL Server service account, my recommendation is to make all machines members of a Windows group and then grant CONNECT permission on the endpoints to this group, instead of to individual machines. Note that using machine accounts (domain\machine$) requires (manual) registration of SPNs.
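A minimal sketch of that endpoint setup (domain, group name and port are only examples):

-- Service Broker endpoint using Windows transport security
CREATE ENDPOINT BrokerEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 4022)
    FOR SERVICE_BROKER (AUTHENTICATION = WINDOWS, ENCRYPTION = REQUIRED);

-- One login for the whole group of machine accounts, instead of one per workstation
CREATE LOGIN [MYDOMAIN\SBWorkstations] FROM WINDOWS;
GRANT CONNECT ON ENDPOINT::BrokerEndpoint TO [MYDOMAIN\SBWorkstations];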

Carl Berger wrote:

2. Generate two certificates on the corporate database and export them to the department servers and workstations. Use these pre-installed certificates in each database to handle dialog security for all conversations, i.e. multiple workstations feed into one user id on the department server using a single certificate. Ditto for all the department servers feeding into corporate.

I believe the BCN service can implement this in a more manageable way. When it receives a request for a missing REMOTE SERVICE BINDING, it can create and deploy a certificate for the requesting service as part of processing the request.

Carl Berger wrote:

3. Use Service Broker IDs to determine where messages are sent (as the same services would be installed at multiple locations).

That's exactly the purpose of the broker id.
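For example (service, contract and GUID below are invented), the broker id is the service_broker_guid of the target database, and a dialog can be pinned to one specific broker instance explicitly:

-- The broker instance id of the current database
SELECT service_broker_guid FROM sys.databases WHERE database_id = DB_ID();

-- Open a dialog to the copy of the service running in one particular broker instance
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [WorkstationService]
    TO SERVICE 'DepartmentService', 'A1B2C3D4-0000-0000-0000-000000000002'
    ON CONTRACT [TransactionContract];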

Carl Berger wrote:

4. When a database is installed/restored at a workstation, does this invalidate any certificates that are shipped with it?

No, certificates are not affected. However, private keys associated with certificates are stored encrypted with the database master key. The Service Broker background threads can access the database master key only if it is encrypted with the service master key (since a background thread cannot have access to the password). So as part of the install/restore, these steps are necessary:
- open the database master key using the password
- add the service master key encryption to the database master key

One note is that if you plan to use this install/restore to create new instances of a service, you'll have to generate a new broker id ( ALTER DATABASE ... SET NEW_BROKER).
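In T-SQL the steps above amount to something like this (database name and password are placeholders):

USE WorkstationDB;
-- Open the database master key with the password it was created with
OPEN MASTER KEY DECRYPTION BY PASSWORD = '<original password>';
-- Re-encrypt it with the local service master key so background threads can open it
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
CLOSE MASTER KEY;

-- Only if the restored copy must become a brand new broker instance
ALTER DATABASE WorkstationDB SET NEW_BROKER WITH ROLLBACK IMMEDIATE;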

HTH,
~ Remus

|||

The BCN service seems to be a good solution for setting up the route.

My proposal here was to create only two certificates to handle all dialog security and to pre-ship them with an installation package for each workstation. There would be only a single certificate/remote user at the target, and a single predefined remote user and certificate reused on every workstation.

The "Can you hear me now?" type of installation would be avoided, new workstations could be installed without a single change at the corporate end.

Comments?

|||

The BCN is also used for querying missing REMOTE SERVICE BINDINGs, so it can trigger all the steps for a security configuration deployment (certs). The BCN is not queried continuously; a query is sent when a ROUTE or RSB is missing, and the BCN service is expected to actually create the missing route or RSB. So the BCN should normally be interrogated only once per new workstation. I think the more appropriate analogy is plug-and-play configuration, rather than 'can you hear me now'.

Shipping a pre-packaged database, with all services, keys, code (including CLR assemblies) and permissions, is certainly an attractive deployment solution. Unless you care about things like non-repudiation of dialogs/messages (a workstation cannot later deny that it sent a given message), having one cert/private key duplicated on each workstation works fine. Another thing to consider is that a private key shared by hundreds of workstations cannot really be considered private anymore, so you might as well use anonymous dialogs (they don't require private keys on the initiator side, and traffic is still encrypted).
Also, many of the problems avoided by shipping the database will come back when the certificates start expiring. Just something to keep in mind...
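A minimal sketch of an anonymous binding (names are invented; the workstation still needs the target's public certificate mapped to a local user, but no private key of its own):

-- Workstation side: binding marked anonymous, no initiator certificate needed
CREATE REMOTE SERVICE BINDING ToDepartmentAnon
    TO SERVICE 'DepartmentIntakeService'
    WITH USER = DepartmentTarget, ANONYMOUS = ON;

-- Department/corporate side: anonymous initiators arrive as 'public'
GRANT SEND ON SERVICE::[DepartmentIntakeService] TO public;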

HTH,
~ Remus

|||

We don't need non-repudiation. The messages are generated by a trusted sales application.

It looks like anonymous dialogs might be the best way to go.

Many thanks.

Wednesday, March 21, 2012

multiple fields from common lookup table

I haven't written SQL in some time and I'm extremely rusty, so help
would be appreciated and thanks in advance.
I have a table (main) that has multiple fields that contain a lookup id
to a common lookup table (lookup). How do I write the query to pull
back one result set that has the form below?
select ?
where main.key = @key
result set
--
key (from main)
behavior description (from lookup)
appearance description (from lookup)
other description (from lookup)
main
--
key
behaviorID
appearanceID
otherID
lookup
--
lookupID
description

|||

Please post DDL, so that people do not have to guess what the keys,
constraints, Declarative Referential Integrity, data types, etc. in
your schema are. Sample data is also a good idea, along with clear
specifications. It is very hard to debug code when you do not let us
see it.

|||

Sba,
Try:
DECLARE @KEYVAL INT
SET @KEYVAL = 1
SELECT M.[key], L1.DESCRIPTION AS 'BEHAVIOR', L2.DESCRIPTION AS 'APPEARANCE',
L3.DESCRIPTION AS 'OTHER'
FROM MAIN M
JOIN LOOKUP L1
ON M.BEHAVIORID = L1.LOOKUPID
JOIN LOOKUP L2
ON M.APPEARANCEID = L2.LOOKUPID
JOIN LOOKUP L3
ON M.OTHERID = L3.LOOKUPID
WHERE M.[key] = @KEYVAL
HTH
Jerry
"sba" <pub9@.s105192480.onlinehome.us> wrote in message
news:1129764599.221796.33700@.g49g2000cwa.googlegroups.com...
>I haven't written SQL in some time and I'm extremely rusty, so help
> would be appreciated and thanks in advance.
> I have a table (main) that has multiple fields that contain a lookup id
> to a common lookup table (lookup). How do I write the query to pull
> back one result set that has the form below?
> select ?
> where main.key = @.key
>
> result set
> --
> key (from main)
> behavior description (from lookup)
> appearance description (from lookup)
> other description (from lookup)
>
> main
> --
> key
> behaviorID
> appearanceID
> otherID
>
> lookup
> --
> lookupID
> description

|||

Looks like you have the famous One True Lookup Table. A classic database design error.
Anyway, you can join multiple times to the same table:
SELECT M.[key], T1.description, T2.description, T3.description
FROM main AS M
JOIN lookup AS T1
ON M.behaviorID = T1.lookupID
JOIN lookup AS T2
ON M.appearanceID = T2.lookupID
JOIN lookup AS T3
ON M.otherID = T3.lookupID ;
David Portas
SQL Server MVP
--

Friday, March 9, 2012

Multiple data sources, single cube?

Newbie question...

I need to create a report with data coming from two entirely separate data sources, the only common factor being the date.

I can see that I could create a single DSV and cube by using views to point to the second database. But is it considered best practice to create a second cube and join the cubes with MDX?
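For example, assuming both sources are reachable from the same SQL Server instance (otherwise a linked server or an ETL step is needed), a hypothetical view in the primary database could expose the second source so both appear in one data source view, related on the date:

-- View in the primary database that points at a table in the second database
CREATE VIEW dbo.vSecondSourceFacts
AS
SELECT b.DateKey, b.MeasureB
FROM OtherDatabase.dbo.FactSourceB AS b;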

Thanks

This is not so much a technical question as a question of how you solve these problems generally. If there is a point where information from separate sources needs to be joined, you should do it not in the cubes but in a data warehouse.

These integration problems should be handled in extract, transform, load (ETL) processes before you build a cube.

Have a look here (http://www.kimballgroup.com/) for support and ideas.

HTH

Thomas Ivarsson

|||

Thanks Thomas,

That ties in with advice from the Project REAL docs too.