Doug Whitfield firstname.lastname@example.org
Minneapolis, MN, USA
I do A&R work for http://blocsonic.com in Minnesota, Kentucky, the DMV (DC, Maryland, Virginia), and North Carolina. You may also find me at https://loadaverage.org/dawsports and https://joindiaspora.com/people/4d0edb1f2c17434702000595. Every Sunday at 8am US Central, you can find me at https://meet.jit.si/QuarantineMusicChat
- Does #apachetomcat have a code review process? I know there are votes before releases, but I don't see any review at
https://gitbox.apache.org/repos/asf?p=tomcat.git;a=commit;h=76115a2a8681e5951aef9037120fa3babeffd9d3 (it may just be that I don't know much about gitbox, though).
- Been a while, mostly because of low engagement here, but I've got an intractable issue. No help so far from the JanusGraph gitter.
Anybody got any ideas on the next direction to go after discovering that I have 11 threads waiting on this one? Here's the offending thread; 0x00007f6ecc104b30 is the problem:
"gremlin-server-exec-4" #134 prio=5 os_prio=0 tid=0x00007f6d48004800 nid=0x171a4a runnable [0x00007f6cd05ef000]
java.lang.Thread.State: RUNNABLE
…
Locked ownable synchronizers:
- <0x00007f6ecc104b30> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
- <0x00007f6ecee7c9b8> (a java.util.concurrent.ThreadPoolExecutor$Worker)
These 11 threads are waiting on gremlin-server-exec-4 to give up that lock: gremlin-server-exec-2, gremlin-server-exec-60, gremlin-server-exec-58, gremlin-server-exec-54, gremlin-server-exec-49, gremlin-server-exec-47, gremlin-server-exec-43, gremlin-server-exec-32, gremlin-server-exec-16, gremlin-server-exec-14, gremlin-server-exec-12
#java
Here it is on pastebin if you prefer: https://pastebin.com/uHgTW6Ym
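As an aside, the holder/waiter relationship can be pulled out of a jstack-style dump mechanically. Here's a minimal sketch, assuming the standard HotSpot dump format; the excerpt below is a trimmed, illustrative reconstruction, not the full dump:

```python
import re

# Tiny excerpt of a HotSpot thread dump (format assumed; trimmed for illustration).
DUMP = '''"gremlin-server-exec-4" #134 prio=5 tid=0x00007f6d48004800 runnable
   java.lang.Thread.State: RUNNABLE
   Locked ownable synchronizers:
        - <0x00007f6ecc104b30> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)

"gremlin-server-exec-2" #130 prio=5 tid=0x00007f6d48004000 waiting on condition
   java.lang.Thread.State: WAITING (parking)
        - parking to wait for  <0x00007f6ecc104b30> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)

"gremlin-server-exec-60" #190 prio=5 tid=0x00007f6d48005000 waiting on condition
   java.lang.Thread.State: WAITING (parking)
        - parking to wait for  <0x00007f6ecc104b30> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
'''

def lock_graph(dump, addr):
    """Return (holders, waiters): thread names holding vs. waiting on one lock address."""
    holders, waiters = [], []
    # Each stanza starts with the quoted thread name on its own line.
    for stanza in re.split(r'\n(?=")', dump):
        m = re.match(r'"([^"]+)"', stanza)
        if not m or addr not in stanza:
            continue
        # The line mentioning the address tells us the relationship.
        line = next(l for l in stanza.splitlines() if addr in l)
        (waiters if 'wait' in line else holders).append(m.group(1))
    return holders, waiters

holders, waiters = lock_graph(DUMP, '0x00007f6ecc104b30')
print('holder:', holders)   # ['gremlin-server-exec-4']
print('waiters:', waiters)  # ['gremlin-server-exec-2', 'gremlin-server-exec-60']
```

Running this over the full pastebin dump would list all 11 waiters at once, which beats eyeballing the dump.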
- If anybody knows Keycloak, I'd love for someone to jump in on this:
So you don't have to click through:
I've been supporting a Kerberos-integrated Keycloak use case recently, and I'd like to ask whether the premise of the PR we'd like to submit is secure before submitting it.
Some background on where we're at: I can use my Kerberos ticket in my browser to log in, and can also, using a forwardable ticket, obtain a JWT with curl, for example:
curl -k --negotiate --delegation policy -u : --location --request POST 'https://keycloak.ipa.test/auth/realms/Test/protocol/openid-connect/token' --data-urlencode 'client_id=grafana.ipa.test' --data-urlencode 'client_secret=my-secret-token' -d 'grant_type=password' -d 'scope=openid profile email' -vvvv
This gives me gss_delegation_credential in my JWT, as I expect. In this case, I'm trying to access an OIDC-integrated Grafana instance, and I've got FreeIPA doing Kerberos and LDAP. Everything here works.
The trouble we ran into is that some of our existing software that obtains JWTs from Keycloak through the Java API isn't integrated with Kerberos, and for certain rigid deployments at customer sites we need to provide that gss_delegation_credential to a service that expects GSSAPI. This software is not in a place where we can add Kerberos functionality quickly, so doing things like implementing a keytab and GSSAPI to imitate the success we've had with the above curl command is out of reach for us at the moment.
What we'd like to do is ask Keycloak for a password-based grant, have that request get authenticated by Keycloak via Kerberos, and have Keycloak obtain and append the gss_delegation_credential to the JWT for us. We have a PR that we'd like to submit, and we've tested that it achieves the objectives we set out to accomplish. However, I'd like to ask the community if this is, prima facie, a supportable flow in Keycloak, or if we're violating some kind of Kerberos prime directive here :) Also in the realm of possibilities is the idea that Keycloak supports this but we're doing it wrong!
What should our next steps be? Can Keycloak provide gss_delegation_credential in the returned JWT on behalf of a successful password grant if integrated with Kerberos? If not, does the community believe this could be done securely? If so, do you want us to submit a PR of what we already have? Thanks!
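For anyone reproducing the check above: a JWT's payload is just base64url-encoded JSON, so confirming whether a token carries gss_delegation_credential takes a few lines. A minimal sketch in Python; the token below is fabricated for illustration (no signature verification is done, so use this only for inspection, never for trust decisions):

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (unverified!) payload segment of a JWT."""
    payload = token.split('.')[1]
    # base64url encoders often strip padding; restore it before decoding.
    payload += '=' * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricated token: dummy header and signature, payload with the claim we care about.
header = base64.urlsafe_b64encode(json.dumps({"alg": "none"}).encode()).decode().rstrip('=')
body = base64.urlsafe_b64encode(json.dumps({
    "preferred_username": "alice",
    "gss_delegation_credential": "base64-ticket",
}).encode()).decode().rstrip('=')
token = f"{header}.{body}.sig"

claims = jwt_claims(token)
print('gss_delegation_credential' in claims)  # True
```

The same function applied to the real token returned by the curl command shows at a glance whether Keycloak attached the delegated credential.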
- Any thoughts from #apachekafka #kafka experts?
1. for a topic with 3 partitions, there is a broker lead for each partition and a single publisher (round-robin) will publish to all three broker leaders. Correct?
2. for a topic with 3 partitions and three publisher instances running, each publisher would use all three broker leaders. The three publisher instances wouldn’t be aligned specifically to one of the three broker leaders. Correct?
3. Rather than developing a model for dividing the work among publishers, wouldn't the publisher offset internal topic manage that? It is supposed to help the publisher understand where it is in publishing events and where to start looking for the next event to publish. This would only work for a three-publisher model if the publishers shared the internal publisher offset topic. I'm assuming that isn't the case? Each publisher instance would have its own internal offset topic?
For the curious, here's an answer I got from a colleague:
#1 for a topic with 3 partitions, there is a broker lead for each partition and a single publisher (round-robin) will publish to all three broker leaders. Correct?
For a topic with 3 partitions, there is a broker-leader for each partition. If a single publisher configures its ProducerRecord (https://kafka.apache.org/25/javadoc/org/apache/kafka/clients/producer/ProducerRecord.html) with neither a key nor a partition present, then a partition will be assigned in a round-robin fashion. Because the topic partition is the unit of replication (https://kafka.apache.org/documentation/#replication), and each partition has a single broker-leader through which all reads and writes go, it is correct to say that a single publisher will publish to all three broker leaders.
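To make the round-robin point concrete, here is a toy simulation of keyless partition assignment; this is a sketch of the behavior, not the Kafka client itself (newer Java clients actually use a sticky partitioner per batch, but across many batches the spread is still even):

```python
from itertools import cycle
from collections import Counter

# Toy model: with no key and no explicit partition, the producer cycles
# through partitions, so every partition (and thus every broker-leader)
# receives a share of the traffic.
partitions = [0, 1, 2]
assigner = cycle(partitions)

sends = [next(assigner) for _ in range(9)]  # 9 keyless records
print(sends)           # [0, 1, 2, 0, 1, 2, 0, 1, 2]
print(Counter(sends))  # each of the three partition leaders gets 3 records
```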
#2 for a topic with 3 partitions and three publisher instances running, each publisher would use all three broker leaders. Correct. The three publisher instances wouldn’t be aligned specifically to one of the three broker leaders. Correct?
If each publisher was configured to create a ProducerRecord<K,V> that went to a specific partition, then an alignment like this could exist. However, if the three publishers were configured totally independently and unaware of each other, nothing stops them from targeting the same partition. Ultimately, it is up to the producer to decide which partition it is writing to, or to trust the hashing or round-robin distribution applied by the producer's partitioner.
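The keyed alternative mentioned above (trusting the hashing) can be sketched the same way. The real Java client hashes keys with murmur2; the checksum below is just a deterministic stand-in to show the property that matters, namely that the same key always lands on the same partition no matter which publisher instance sends it:

```python
import zlib

NUM_PARTITIONS = 3

def partition_for(key: bytes) -> int:
    # Stand-in for the client's key hash (really murmur2 in the Java client).
    return zlib.crc32(key) % NUM_PARTITIONS

# Two independent publishers sending the same key hit the same partition...
assert partition_for(b"order-42") == partition_for(b"order-42")
# ...whereas an explicit partition in ProducerRecord bypasses hashing entirely.
print({k: partition_for(k) for k in (b"a", b"b", b"c")})
```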
#3 Rather than developing a model for dividing the work among publishers, wouldn't the publisher offset internal topic manage that? It is supposed to help the publisher understand where it is in publishing events and where to start looking for the next event to publish. This would only work for a three-publisher model if the publishers shared the internal publisher offset topic. I'm assuming that isn't the case? Each publisher instance would have its own internal offset topic?
If you're referring to the idea that a publisher can store its current offset in a Kafka topic, and coordinate with other publishers working in tandem with it by relying on the committed topic offset, I'd like to clarify some terms first. If a producer is also a consumer, the consumer facet or nature of this worker/thread/app/route is what contains the offset. If you were consuming files from a network share, for example, you might move them to a hidden folder or delete them when you were done consuming them for publication onto a Kafka topic: a simple but effective way of marking which files have been consumed and published and which are ready for consumption. In Kafka, an offset refers to a message's position within a partition, and a committed offset marks how far a consumer has read. So here, I'm assuming you mean your producer is also a Kafka consumer, and is doing some work like enrichment before publishing the completed work to another topic for further processing.
If you want to coordinate between consumer-producers in this way, I would create a single topic with a single message type, but publish messages from each consumer-producer with a header stating which one is which. Then, on startup of the consumer-producer, you can consume a message first from the offset topic, filter for the header to determine which offset message belongs to the individual worker, and then start a stream at the offset it acquires.
Just got this from the customer:
"I don't think I did a good job presenting point 3 to you. This was in the context of a Kafka Connect Publisher (Source). I didn't do a lot of homework on this, but the idea is that Kafka Connect keeps track of where the publisher is in publishing from a source (database). https://docs.confluent.io/platform/6.2.1/connect/javadocs/javadoc/org/apache/kafka/connect/source/So...
I think the response to point 3 was in regards to the standard Java Publisher API and not considering Connect. Can I get an alternative response considering Kafka Connect Publishers?"
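For what it's worth, the colleague's header-based coordination scheme can be sketched as a toy in-memory model: a shared offset topic where each consumer-producer tags its commits with a "worker" header and, on startup, replays the topic, keeps only its own records, and resumes from the last one. Everything below is plain Python with made-up names, not a Kafka API:

```python
# Simulated compacted offset topic: each record carries a "worker" header
# identifying which consumer-producer committed it.
offset_topic = [
    {"headers": {"worker": "cp-1"}, "offset": 40},
    {"headers": {"worker": "cp-2"}, "offset": 55},
    {"headers": {"worker": "cp-1"}, "offset": 47},  # cp-1's most recent commit
]

def resume_offset(topic, worker, default=0):
    """Latest committed offset for one worker, or `default` if it has none yet."""
    mine = [m["offset"] for m in topic if m["headers"].get("worker") == worker]
    return mine[-1] if mine else default

print(resume_offset(offset_topic, "cp-1"))  # 47
print(resume_offset(offset_topic, "cp-2"))  # 55
print(resume_offset(offset_topic, "cp-3"))  # 0 (new worker starts from scratch)
```

Kafka Connect source connectors do something conceptually similar for you, persisting per-source-partition offsets in an internal topic, which is presumably what the customer's point 3 is getting at.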
- It has been 3 months since anyone has commented on or shared any of my posts. I guess the pumpiverse is dead.
I've not been paying the best attention here since the campaign for city council has been keeping me busy as has the new redistricting madness that popped up. I've testified twice to the Ohio Redistricting Commission, drawn my own proposed congressional redistricting map, and am likely going to be testifying again in October.
- something I'm investigating: https://twitter.com/dawsports/status/1433156252566163464